US9720734B2 - Multi-host configuration for virtual machine caching - Google Patents
- Publication number
- US9720734B2 (application US 14/925,948)
- Authority
- US
- United States
- Prior art keywords
- virtual machine
- computing device
- machine element
- machine elements
- host
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0888—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using selective caching, e.g. bypass
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/60—Details of cache memory
Definitions
- the disclosed embodiments relate generally to configuration of memory for virtual machines and for networked computing devices hosting virtual machine elements.
- a virtual machine may utilize flash memory caching for improved efficiency compared with conventional hard drive storage techniques.
- improvements to efficiency may include increased read speeds, increased storage write speeds, reduced storage input/output (I/O) contention, and reduced storage network traffic.
- configuring those virtual machines for caching can be a significant burden on administrators of such systems.
- the embodiments disclosed herein allow a server computer to assign various caching modes to virtual machine elements in accordance with a received policy.
- the time and labor involved in configuring caching for virtual machine elements may be significantly reduced by a system capable of determining when a virtual element requires configuration (e.g., when a host computing device that hosts the virtual element comes online) and configuring memory for the virtual machine element in accordance with a received policy.
- Policy-based memory configuration for virtual machine elements allows a system administrator to apply a specific caching mode to a particular set of virtual machine elements.
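The policy-driven configuration described above can be sketched as a small data model. This is an illustrative sketch, not the patent's implementation; the class name, field names, and `mode_for` helper are all assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Caching modes described in the disclosure.
WRITE_BACK = "write-back"
WRITE_THROUGH = "write-through"

@dataclass
class StoragePolicy:
    """A storage policy mapping sets of virtual machine elements to caching modes."""
    # Set name -> caching mode applied to that set's members.
    modes: Dict[str, str] = field(default_factory=dict)
    # Set name -> identifiers of the virtual machine elements in the set.
    members: Dict[str, List[str]] = field(default_factory=dict)

    def mode_for(self, element_id: str) -> Optional[str]:
        """Return the caching mode for an element, or None if no set contains it."""
        for set_name, ids in self.members.items():
            if element_id in ids:
                return self.modes[set_name]
        return None

policy = StoragePolicy(
    modes={"first": WRITE_BACK, "second": WRITE_THROUGH},
    members={"first": ["vm-1", "vm-2"], "second": ["vm-3"]},
)
print(policy.mode_for("vm-3"))  # write-through
```

A server applying this policy would look up `mode_for` for each virtual machine element that requires configuration and apply the returned caching mode to that element's cache section.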
- FIG. 1 is a block diagram illustrating an implementation of a distributed system, in accordance with some embodiments.
- FIG. 2A is a block diagram illustrating an implementation of an application server system, in accordance with some embodiments.
- FIG. 2B is a block diagram illustrating an implementation of a host system, in accordance with some embodiments.
- FIGS. 3A-3B illustrate flowchart representations of a method of configuring a plurality of memory caches, in accordance with some embodiments.
- a method for configuring a plurality of memory caches is performed by a server computing device.
- the method includes receiving or accessing a storage policy including a first caching mode for a first set of one or more virtual machine elements and a second caching mode for a second set of one or more virtual machine elements.
- the one or more virtual machine elements of the first set are different from the one or more virtual machine elements of the second set.
- the method further includes determining that a virtual machine element, hosted by a first host computing device, requires configuration.
- the method further includes, in response to determining that the virtual machine element requires configuration, determining whether the virtual machine element is a virtual machine element of the first set of one or more virtual machine elements or the second set of one or more virtual machine elements.
- the method further includes, in response to determining that the virtual machine element is a virtual machine element of the first set of one or more virtual machine elements, applying the first caching mode to a section of a logical solid state drive associated with the virtual machine element, and in response to determining that the virtual machine element is a virtual machine element of the second set of one or more virtual machine elements, applying the second caching mode to the section of the logical solid state drive associated with the virtual machine element.
- the server computing device is distinct from the first host computing device that hosts the virtual machine element.
- At least one of the first caching mode or the second caching mode is a write-back mode.
- At least one of the first caching mode or the second caching mode is a write-through mode.
- the storage policy includes (e.g., specifies) a first section size for the first set of one or more virtual machine elements, and a second section size for the second set of one or more virtual machine elements.
- At least one of the first section size or the second section size is an indication of a number of sections.
- At least one of the first section size or the second section size is an indication of a proportion of storage size for a logical solid state drive.
- determining that the virtual machine element is a virtual machine element of the first set of one or more virtual machine elements or the second set of one or more virtual machine elements includes determining that identifying information for a virtual machine element matches first information for the first set of one or more virtual machine elements or matches second information for the second set of one or more virtual machine elements.
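One way to implement the matching step just described is shell-style glob patterns against element identifiers. The patent does not prescribe a matching syntax, so the glob approach and the function below are assumptions for illustration.

```python
from fnmatch import fnmatch
from typing import Optional

def classify(element_id: str, first_pattern: str, second_pattern: str) -> Optional[str]:
    """Return which set a virtual machine element belongs to, based on whether
    its identifying information matches the set's identifying pattern."""
    if fnmatch(element_id, first_pattern):
        return "first"
    if fnmatch(element_id, second_pattern):
        return "second"
    return None  # element matches neither set

# Example: database VMs fall in the first set, web VMs in the second.
print(classify("db-vm-07", "db-*", "web-*"))   # first
print(classify("web-vm-03", "db-*", "web-*"))  # second
```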
- the method further includes establishing a host pair between the first host computing device, which hosts the virtual machine element, and a second host computing device that is different from the first host computing device, wherein, in accordance with the established host pair, data cached for the virtual machine element by the first host computing device is also cached by the second host computing device.
- the method further includes storing identifying information for the first host computing device in association with identifying information for the second host computing device.
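The host-pairing step above, storing identifying information for the two hosts in association so that data cached by the first host is also cached by the second, might be recorded as follows. The registry class and method names are hypothetical.

```python
from typing import Dict, Optional

class HostPairRegistry:
    """Tracks established host pairs for redundant caching."""

    def __init__(self) -> None:
        # Primary host id -> secondary host id that mirrors its cached data.
        self._pairs: Dict[str, str] = {}

    def establish_pair(self, primary: str, secondary: str) -> None:
        if primary == secondary:
            raise ValueError("a host cannot be paired with itself")
        self._pairs[primary] = secondary

    def mirror_target(self, primary: str) -> Optional[str]:
        """Secondary host that should also cache data for the primary host."""
        return self._pairs.get(primary)

registry = HostPairRegistry()
registry.establish_pair("host-120a", "host-120b")
print(registry.mirror_target("host-120a"))  # host-120b
```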
- the storage policy specifies the first caching mode as a default policy for virtual machine elements hosted by a first cluster of host computing devices, and further specifies a different caching mode than the first caching mode for a specified set of virtual machine elements hosted by the first cluster of host computing devices.
- a server computing device includes memory, one or more hardware processors, and one or more modules, stored in said memory and configured for execution by the one or more hardware processors, wherein the one or more modules, when executed by the one or more processors, cause the server computing device to perform the method of any of A1-A10.
- a server computing device includes memory means for receiving or accessing a storage policy including a first caching mode for a first set of one or more virtual machine elements and a second caching mode for a second set of one or more virtual machine elements, wherein the one or more virtual machine elements of the first set are different from the one or more virtual machine elements of the second set.
- the server computing device further includes means for determining that a virtual machine element, hosted by a first host computing device, requires configuration and means for determining, in response to said determining that the virtual machine element requires configuration, whether the virtual machine element is a virtual machine element of the first set of one or more virtual machine elements or the second set of one or more virtual machine elements.
- the server computing device further includes means for applying the first caching mode to a section of a logical solid state drive associated with the virtual machine element in response to determining that the virtual machine element is a virtual machine element of the first set of one or more virtual machine elements, and means for applying the second caching mode to the section of the logical solid state drive associated with the virtual machine element in response to determining that the virtual machine element is a virtual machine element of the second set of one or more virtual machine elements.
- the server computing device is distinct from the first host computing device that hosts the virtual machine element.
- the server computing device further comprises means for performing the method of any one of A1-A10.
- a non-transitory computer-readable storage medium stores one or more programs configured for execution by one or more processors of a server computing device, the one or more programs including instructions for performing the method of any one of A1-A10.
- FIG. 1 is a block diagram illustrating an implementation of a distributed system 100 , in accordance with some embodiments. While some example features are illustrated, various other features have not been illustrated for the sake of brevity and so as not to obscure pertinent aspects of the example embodiments disclosed herein.
- distributed system 100 includes an application server system 110 connected to a plurality of host systems 120 (e.g., 120 a - 120 m ) through a communication network 130 such as the Internet, other wide area networks, local area networks, metropolitan area networks, wireless networks, or any combination of such networks.
- application server system 110 is connected to a plurality of clusters 140 (e.g., 140 a - 140 c ) through communication network 130 .
- a cluster, such as cluster 140 c , includes a plurality of host systems, e.g., host systems 120 a - 120 m.
- application server system 110 includes policies 112 (e.g., policies indicating caching modes to be used by one or more sections of a respective persistent cache 122 of a respective host system 120 ).
- application server system 110 receives policies from different computing devices. For example, in some embodiments, a policy created by a system administrator using a graphical user interface executing on a remote computing device may be received by application server system 110 from the remote computing device. In other embodiments, a policy may be created using application server system 110 and/or a default policy may be stored by application server system 110 , or accessed by application server system 110 from a predefined location (e.g., a predefined logical location at a remotely located server).
- application server system 110 includes I/O driver(s) 114 .
- a respective I/O driver 114 is communicated to a respective host system 120 for execution by the respective host system 120 , as explained further below.
- application server system 110 communicates with host systems 120 using host communication module 116 .
- host communication module includes instructions for communicating, via communication network 130 , a respective I/O driver 114 to a respective host system 120 .
- application server system 110 includes a host configuration module 118 for configuring one or more sections of a respective persistent cache 122 of a respective host system 120 , e.g., in accordance with a respective policy 112 .
- a respective host system 120 executes a plurality of virtual machines 126 (e.g., 126 a - 126 v ) and includes a respective persistent cache 122 (e.g., 122 a - 122 m ) shared by the plurality of virtual machines 126 executed on the respective host system 120 .
- persistent cache 122 includes non-volatile solid state storage, such as flash memory.
- persistent cache 122 is a single flash memory device while in other embodiments persistent cache 122 includes a plurality of flash memory devices.
- a persistent cache 122 is a logical solid state drive (LSSD) that provides storage capacity on one or more solid state devices which are accessible as one logical unit.
- LSSD as used herein may also refer to a solid state drive (SSD).
- a flash memory device includes one or more flash memory die, one or more flash memory packages, one or more flash memory channels or the like.
- persistent cache 122 is NAND-type flash memory or NOR-type flash memory.
- persistent cache 122 includes one or more three-dimensional (3D) memory devices.
- persistent cache 122 includes a solid-state drive (SSD) controller.
- in some embodiments, persistent cache 122 includes other types of storage media (e.g., PCRAM, ReRAM, STT-RAM, etc.).
- host systems 120 execute I/O drivers 124 (e.g., 124 a - 124 m ).
- I/O driver 124 may be executed on a respective host system 120 as a daemon (i.e., executed as a background process by an operating system of host system 120 ) to configure one or more sections of persistent cache 122 .
- each of the plurality of the virtual machines 126 is a client 150 .
- Each client 150 executes one or more client applications 152 (e.g., a financial application, web application, educational application, etc.) that submit data access commands (e.g., data read and write commands) to the respective host system 120 .
- an instance of I/O driver 124 executed by the respective host system 120 directs the handling of data access commands by virtual machines hosted by the respective host system 120 in accordance with configuration settings or parameters, including one or more of a caching mode settings or parameters, provided to the respective host system 120 by application server system 110 .
- the caching mode settings or parameters specify a respective section of a logical solid state drive to use as a cache (for example for caching read or write data and for accessing cached data) for a particular virtual machine element (e.g., a virtual drive), and also specifying a caching mode (e.g., write-through or write-back) to use in conjunction with that particular virtual machine element.
- Those storage policy settings are provided by application server system 110 to the respective host system 120 so as to configure the I/O driver 124 at the respective host system 120 to handle data caching in accordance with the portion of storage policy 112 applicable to that host system.
- the storage policy settings provided by server computing device 110 to the respective host system 120 include a cache mode setting, a caching priority setting, and/or a host pairing setting, each of which is described in more detail below.
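The per-element settings described above (cache mode, caching priority, host pairing) might be pushed to a host's I/O driver as a payload shaped like the following. All field names here are illustrative assumptions, not taken from the patent; the validation helper is likewise hypothetical.

```python
from typing import Dict

# The three kinds of settings described in the disclosure.
REQUIRED_FIELDS = {"cache_mode", "caching_priority_gb", "host_pair"}

def validate_settings(settings: Dict[str, Dict]) -> bool:
    """Check that each virtual machine element's settings carry a cache mode,
    a caching priority, and a host pairing before sending them to a host."""
    for element_id, fields in settings.items():
        missing = REQUIRED_FIELDS - fields.keys()
        if missing:
            raise ValueError(f"{element_id} missing {sorted(missing)}")
    return True

settings = {
    "vm-1/vdisk-0": {
        "cache_mode": "write-back",   # or "write-through"
        "caching_priority_gb": 32,    # section size acts as caching priority
        "host_pair": "host-120b",     # mirror cached data to this host
    },
}
print(validate_settings(settings))  # True
```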
- distributed system 100 includes secondary storage 162 connected to host systems 120 via communication network 130 .
- secondary storage 162 communicates with communication network 130 via storage area network 160 .
- storage area network 160 obtains and processes data access commands from host systems 120 and returns results to host systems 120 .
- secondary storage 162 stores data for one or more virtual machine elements (e.g., one or more virtual machines 126 and/or one or more virtual disks of a virtual machine 126 ) accessible to a client 150 on a respective host system 120 .
- Host systems 120 may use a respective persistent cache 122 for temporary storage and secondary storage 162 for long-term storage.
- data written to a respective persistent cache 122 a may also be written to secondary storage 162 without waiting for the write data to be evicted from the cache or, alternatively, waiting for the cached copy of the write data to be prepared for invalidation.
- data written to persistent cache 122 m is not written to secondary storage 162 until the write data is evicted from persistent cache 122 m , or, alternatively, the cached copy of the write data is prepared for invalidation.
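The two write behaviors just described can be modeled with a toy cache that either writes through to secondary storage immediately or defers the write until eviction. This is a deliberate simplification for illustration; real caches also track dirty/clean state and handle invalidation.

```python
class ToyCache:
    """Minimal model of write-through vs write-back caching."""

    def __init__(self, mode: str) -> None:
        assert mode in ("write-through", "write-back")
        self.mode = mode
        self.cache = {}      # stands in for persistent cache 122
        self.secondary = {}  # stands in for secondary storage 162

    def write(self, key: str, value: bytes) -> None:
        self.cache[key] = value
        if self.mode == "write-through":
            # Data is written through to secondary storage before completion.
            self.secondary[key] = value

    def evict(self, key: str) -> None:
        # In write-back mode, data reaches secondary storage only on eviction.
        self.secondary[key] = self.cache.pop(key)

wb = ToyCache("write-back")
wb.write("block-9", b"data")
print("block-9" in wb.secondary)  # False: not yet evicted
wb.evict("block-9")
print("block-9" in wb.secondary)  # True: written back on eviction
```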
- FIG. 2A is a block diagram of application server system 110 , which may be implemented using one or more servers.
- the application server system 110 is herein described as implemented using a single server or other computer.
- Application server system 110 generally includes one or more processing units 202 (sometimes called CPUs or processors or hardware processors), implemented in hardware, for executing modules, programs, and/or instructions stored in memory 206 (and thereby performing processing operations), memory 206 , one or more network or other communication interfaces 204 , and one or more communication buses 208 for interconnecting these components.
- the communication buses 208 may include circuitry (sometimes called a chipset) that interconnects and controls communications between system components.
- Memory 206 includes high speed random access memory and optionally includes non-volatile memory, such as one or more magnetic disk storage devices and/or flash memory devices. Memory 206 optionally includes mass storage that is remotely located from the CPU(s) 202 . In some embodiments, memory 206 stores the following programs, modules, and data structures, or a subset thereof:
- an operating system 208 that includes procedures for handling various basic system services and for performing hardware independent tasks
- a network communication module 210 that is used for connecting application server system 110 to other computers via the one or more communication network interfaces 204 (wired and/or wireless) and one or more communication networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and the like; and policies 112 , I/O driver(s) 114 , host communication module 116 , and host configuration module 118 , as described above with reference to FIG. 1 .
- Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above.
- the above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments.
- memory 206 may store a subset of the modules and data structures identified above.
- memory 206 may store additional modules and data structures not described above.
- the programs, modules, and data structures stored in memory 206 , or the non-transitory computer readable storage medium of memory 206 provide instructions for implementing some of the methods described below.
- some or all of these modules may be implemented with specialized hardware circuits that subsume part or all of the module functionality.
- one or more respective policies 112 are created for a plurality of clusters 140 , and/or a subset of the plurality of clusters 140 , such as an individual cluster (e.g., cluster 140 a ).
- application server system 110 may store policies to be applied to one or more host systems 120 of cluster 140 a .
- default policy 220 specifies a set of policies to be applied to all virtual machines (or other virtual machine elements, such as virtual drives, sometimes called virtual data storage devices or virtual data storage drives) in cluster 140 a , except those virtual machines (or virtual machine elements) in cluster 140 a for which sub-policies have been specified.
- This example further specifies a pattern 222 , Pattern 1A 222 a , and a sub-policy 224 , Sub-Policy 1A 224 a , where pattern 222 a indicates or specifies the virtual machines or virtual machine elements in cluster 140 a to which sub-policy 224 a applies, and sub-policy 224 a specifies a set of policies (e.g., one or more particular policies) for the virtual machine elements whose identifiers match pattern 222 a.
- the policy for cluster 140 a includes one or more additional “pattern and sub-policy” pairs 222 / 224 , in addition to the 222 a / 224 a pair shown in FIG. 2A , where each “pattern and sub-policy” pair includes a respective pattern, specifying a set of virtual machine elements, and a sub-policy, specifying a set of policies (i.e., one or more policies) that apply to the virtual machine elements that match the specified pattern.
- a first respective pattern 222 includes identifiers for specific virtual machine elements.
- default policy 220 specifies a first caching mode, such as write-back mode, that is the default caching mode for virtual machines or virtual machine elements in cluster 140 a
- sub-policy 224 a specifies a second caching mode, such as write-through mode, for virtual machines or virtual machine elements in cluster 140 a whose identifiers match pattern 222 a .
- virtual machines or virtual machine elements using the write-through mode when writing data to cache, also write the same data to secondary storage (e.g., hard disk storage devices) without waiting for the write data to be evicted from the cache or, alternatively, waiting for the cached copy of the write data to be prepared for invalidation (which, in some systems is a preparatory operation performed prior to eviction from the cache, and which includes copying the write data from the cache to secondary storage, after which the write data in the cache is no longer “dirty” and is instead considered to be “clean”).
- virtual machines or virtual machine elements using the write-back mode when writing data to cache, do not write the same data to secondary storage (e.g., hard disk storage devices) until the write data is evicted from the cache or, alternatively, the cached copy of the write data is prepared for invalidation (which, in some systems is a preparatory operation performed prior to eviction from the cache, and which includes copying the write data from the cache to secondary storage, after which the write data in the cache is no longer “dirty” and is instead considered to be “clean”).
- application server system 110 configures virtual machines or virtual machine elements in cluster 140 a to use the default caching mode, specified by default policy 220 , unless their identifier matches the pattern 222 for a sub-policy 224 that specifies a different caching mode. For those virtual machines or virtual machine elements in cluster 140 a whose identifiers match the pattern 222 for a sub-policy 224 that specifies the different caching mode, application server system 110 configures those virtual machines or virtual machine elements in cluster 140 a to use the different caching mode.
- a sub-policy 224 in policies 112 takes precedence over (i.e., overrides) a default policy specified in policies 112 , with respect to the cluster for which the default policy and sub-policy are specified.
- a respective sub-policy 224 includes a sub-sub-policy and a corresponding pattern specified for the sub-sub-policy (not shown in FIG. 2A )
- the sub-sub-policy would take precedence over (i.e., override) both the default policy for the corresponding cluster and the sub-policy 224 for those virtual machines or virtual machine elements in the corresponding cluster whose identifiers match the pattern specified for the sub-sub-policy.
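The precedence rules above (a sub-sub-policy overrides a sub-policy, which overrides the default policy) can be sketched as a last-match-wins lookup over increasingly specific pattern/policy pairs. The glob pattern syntax and function shape are assumptions for illustration.

```python
from fnmatch import fnmatch
from typing import List, Tuple

def resolve_caching_mode(element_id: str, default_mode: str,
                         sub_policies: List[Tuple[str, str]]) -> str:
    """sub_policies is a list of (pattern, mode) pairs ordered from least to
    most specific (sub-policy, then sub-sub-policy). The last matching entry
    wins, so more specific policies override the default and each other."""
    mode = default_mode
    for pattern, sub_mode in sub_policies:
        if fnmatch(element_id, pattern):
            mode = sub_mode
    return mode

# Default write-back; "db-*" elements get write-through via a sub-policy;
# "db-audit-*" elements are overridden back to write-back by a sub-sub-policy.
subs = [("db-*", "write-through"), ("db-audit-*", "write-back")]
print(resolve_caching_mode("web-vm-1", "write-back", subs))   # write-back
print(resolve_caching_mode("db-vm-2", "write-back", subs))    # write-through
print(resolve_caching_mode("db-audit-1", "write-back", subs)) # write-back
```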
- application server system 110 stores multiple sets of policies, where each set of policies corresponds to a different set of host systems 120 , such as a different cluster 140 of host systems, or a different subset of host systems in a cluster 140 .
- one or more respective policies 112 are created for a plurality of host systems 120 and/or a subset of the plurality of host systems 120 , such as an individual host system (e.g., host system 120 m ).
- FIG. 2B is a block diagram of a host system 120 m , which may be implemented using one or more servers.
- the host system 120 m is herein described as implemented using a single server or other computer.
- Host system 120 m generally includes one or more processing units 252 (sometimes called CPUs or processors or hardware processors), implemented in hardware, for executing modules, programs, and/or instructions stored in memory 256 (and thereby performing processing operations), memory 256 , one or more network or other communication interfaces 254 , and one or more communication buses 258 for interconnecting these components.
- the communication buses 258 may include circuitry (sometimes called a chipset) that interconnects and controls communications between system components.
- Memory 256 includes high speed random access memory and optionally includes non-volatile memory, such as one or more magnetic disk storage devices and/or flash memory devices. Memory 256 optionally includes mass storage (e.g., secondary storage 162 ) that is remotely located from the CPU(s) 252 . In some embodiments, secondary storage 162 communicates with host system 120 m via storage area network 160 and/or communication interface(s) 254 . In some embodiments, memory 256 stores the following programs, modules, and data structures, or a subset thereof:
- an operating system 259 that includes procedures for handling various basic system services, for performing hardware independent tasks, and for performing procedures defined by I/O Driver 124 m (such as procedures for applying a respective policy 112 to host system 120 m );
- a network communication module 260 that is used for connecting host system 120 m to other computers via the one or more communication network interfaces 254 (wired and/or wireless) and one or more communication networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and the like;
- policy settings 262 that include one or more policies 112 ( FIG. 1 ) applicable to host 120 m (e.g., as applied to host 120 m by I/O driver 124 m );
- application(s) 262 (for example, one or more applications executed by virtual machines 126 hosted by host system 120 m );
- persistent cache 122 m , typically implemented as one or more solid state drives, or as one or more logical solid state drives, which in turn typically include flash memory devices to store information; and
- I/O driver 124 m and virtual machines 126 , as described above with reference to FIG. 1 .
- Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above.
- the above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments.
- memory 256 may store a subset of the modules and data structures identified above.
- memory 256 may store additional modules and data structures not described above.
- the programs, modules, and data structures stored in memory 256 , or the non-transitory computer readable storage medium of memory 256 provide instructions for implementing some of the methods described below.
- some or all of these modules may be implemented with specialized hardware circuits that subsume part or all of the module functionality.
- FIGS. 3A-3B illustrate a flowchart representation of a method of configuring a plurality of memory caches, in accordance with some embodiments.
- a method 300 is performed by a server computing device, e.g., application server system 110 .
- the method 300 is governed by instructions that are stored in a non-transitory computer-readable storage medium (e.g., memory 206 ) and that are executed by one or more processors (e.g., hardware processors) of a device, such as the one or more processing units (e.g., CPU(s) 202 ) ( FIG. 2A ).
- For ease of explanation, the following describes method 300 as performed by a server computing device (e.g., application server system 110 ).
- the server computing device receives ( 302 ) or accesses a storage policy including a first caching mode for a first set of one or more virtual machine elements and a second caching mode for a second set of one or more virtual machine elements.
- a caching mode indicates how a virtual machine element (e.g., of host system 120 a ) caches information to one or more memory cache devices (e.g., persistent cache 122 a , such as an LSSD).
- the one or more virtual machine elements of the first set are different from the one or more virtual machine elements of the second set.
- a virtual machine element is a virtual machine 126 or a virtual disk of a virtual machine 126 .
- a virtual disk is a virtual storage device that is mapped to one or more physical storage devices (e.g., storage is allocated to a virtual disk from a main memory for host system 120 , such as a hard drive of host system 120 and/or a hard drive of a computing system remote from and communicatively coupled to host system 120 .)
- a virtual machine includes multiple virtual disks.
- a set of virtual elements is a set of one or more virtual machines and/or virtual disks of a respective host system 120 .
- a set of virtual elements is a set of one or more virtual machines and/or virtual disks of a respective cluster 140 .
- At least one of the first caching mode or the second caching mode is a write-back mode ( 304 ).
- when a caching mode is a write-back mode, write data is written to cache memory (e.g., persistent cache 122 a ) and completion is confirmed to the host system (e.g., host system 120 a ). Because the only copy of the written data is in the cache (rather than main memory for host system 120 ), write-back mode potentially reduces latency and increases I/O (input/output) throughput.
- At least one of the first caching mode or the second caching mode is a write-through mode ( 306 ).
- when a caching mode is a write-through mode, write data is written to cache memory (e.g., persistent cache 122 a ) and through to main memory for host system 120 before completion is confirmed to the host system (e.g., host system 120 a ).
- latency for write-through mode is greater than latency for write-back mode because of the time consumed by writing data through to main memory.
- Read I/O performance is improved in write-through mode when reading occurs from cache memory (i.e., when read requests or read operations are satisfied using data already present (i.e., stored or cached) in cache memory).
- the storage policy includes ( 308 ) a first section size for the first set of one or more virtual machine elements and a second section size for the second set of one or more virtual machine elements.
- a section size is, for example, an amount of storage space allocated on a LSSD. Section sizes specified by the storage policy are sometimes herein called caching priority settings.
- when the virtual machine element is a virtual machine, the amount of storage space allocated to the virtual machine element for caching data corresponds to a caching priority for the virtual machine.
- when the virtual machine element is a virtual drive, the amount of storage space allocated to the virtual machine element for caching data corresponds to a caching priority for a virtual machine that uses the virtual drive to store or access data.
- At least one of the first section size and the second section size is an indication ( 310 ) of a number of sections of an LSSD.
- a section is, for example, a portion of an LSSD that is defined according to a default storage size (e.g., 16 gigabytes) or that is defined according to a user-indicated storage size.
- a number of sections is, for example, a default number of sections or a user-defined number of sections.
- a section of an LSSD is shared by multiple virtual machine elements (e.g., multiple virtual machines and/or multiple virtual disks of a virtual machine).
- For example, a policy for a first set of one or more virtual machine elements indicates that a section size of persistent cache 122 (e.g., including persistent cache of one or more host systems 120 ) for each virtual machine is two 16-gigabyte sections of persistent cache 122 , while a policy for a second set of one or more virtual machine elements (e.g., all virtual machines of cluster 140 a ) indicates that a section size of persistent cache for each virtual machine is four sections with a minimum section size of 20 gigabytes.
- a default number of sections is two sections for each virtual machine with a minimum 16 gigabyte section size, where one of the two sections is used for a write-through caching mode and the other of the two sections is used for a write-back caching mode.
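The section-size settings above can be encoded in a small structure. This is a hypothetical encoding for illustration; the `SectionPolicy` name is an assumption, while the section counts and minimum sizes are the example values from the text:

```python
# Hypothetical per-set section-size policy (section count + minimum size).
from dataclasses import dataclass

GiB = 2**30

@dataclass
class SectionPolicy:
    num_sections: int      # number of LSSD sections allocated per virtual machine
    min_section_gib: int   # minimum size of each section, in gigabytes

    def bytes_per_vm(self):
        """Minimum cache space a VM receives under this policy."""
        return self.num_sections * self.min_section_gib * GiB

# Default from the text: two 16-gigabyte sections per VM
# (one used for write-through caching, the other for write-back).
default_policy = SectionPolicy(num_sections=2, min_section_gib=16)
# Example second-set policy: four sections, minimum 20 gigabytes each.
cluster_policy = SectionPolicy(num_sections=4, min_section_gib=20)

assert default_policy.bytes_per_vm() == 32 * GiB
assert cluster_policy.bytes_per_vm() == 80 * GiB
```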
- At least one of the first section size or the second section size is an indication ( 312 ) of a proportion of storage size for an LSSD.
- a system administrator can specify a section size for a set of one or more virtual machine elements by specifying a proportion (e.g., a percentage) of total or partial LSSD size, e.g., of an LSSD of a respective host system 120 that hosts the one or more virtual machine elements.
- a proportion of a storage size for an LSSD is, e.g., a percentage of an LSSD, an amount of available space on an LSSD minus a predefined amount of storage space (e.g., a specific number of megabytes), etc.
- a section size is an indication of a proportion of a virtual flash file system (VFFS) size.
- VFFS size may be the amount of memory that is usable for caching by a respective host system 120 on an LSSD.
- a VFFS size may indicate a storage capacity remaining on an LSSD when a portion of the LSSD is occupied by non-cache system data.
- a proportion of storage size for an LSSD is indicated as a percentage of available VFFS, an amount of available space on a VFFS minus a predefined amount of storage (e.g., a specific number of megabytes), etc.
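The two proportional interpretations just described, a percentage of VFFS size and available space minus a fixed reserve, can be sketched as follows; the helper names are assumptions for illustration:

```python
# Hypothetical helpers for proportional section sizing on an LSSD/VFFS.

def section_bytes_from_percent(vffs_bytes, percent):
    """Section size as a percentage of the usable VFFS capacity."""
    return int(vffs_bytes * percent / 100)

def section_bytes_minus_reserve(vffs_bytes, reserve_bytes):
    """Section size as available VFFS space minus a predefined reserve
    (e.g., a specific number of megabytes), never below zero."""
    return max(0, vffs_bytes - reserve_bytes)

MiB = 2**20
vffs = 400_000 * MiB  # assumed usable cache capacity on the LSSD
assert section_bytes_from_percent(vffs, 25) == 100_000 * MiB
assert section_bytes_minus_reserve(vffs, 1_024 * MiB) == 398_976 * MiB
```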
- a received storage policy specifies a first caching mode for a first portion of a memory cache (e.g., persistent cache 122 a , such as an LSSD) for a first set of one or more virtual machine elements and a second caching mode for a second portion of the memory cache for a second set of one or more virtual machine elements.
- the storage policy specifies ( 314 ) the first caching mode as a default policy for virtual machine elements hosted by a first cluster of host computing devices, and further specifies a different caching mode than the first caching mode for a specified set of virtual machine elements hosted by the first cluster of host computing devices.
- a storage policy specifies that write-back mode is a default mode (i.e., a default policy) for virtual machines 126 of cluster 140 c and additionally specifies that write-through mode is to be used for virtual machine 126 v of cluster 140 a.
- the storage policy specifies one or more virtual machine elements that are not to be configured automatically in accordance with the storage policy (e.g., that a system administrator will configure manually).
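The default-plus-override lookup described above can be sketched as a small resolver. The policy structure and names (cluster defaults, per-element overrides, a manual-configuration list) are assumptions for illustration, using the example values from the text:

```python
# Hypothetical resolver for a storage policy with a per-cluster default
# caching mode, per-element overrides, and manually configured elements.

def resolve_caching_mode(policy, cluster, vm_name):
    """Return the caching mode for a VM, or None if it is to be configured manually."""
    if vm_name in policy.get("manual", ()):
        return None  # excluded from automatic configuration
    overrides = policy.get("overrides", {})
    if vm_name in overrides:
        return overrides[vm_name]  # specified set overrides the cluster default
    return policy["cluster_defaults"][cluster]

policy = {
    "cluster_defaults": {"cluster_140a": "write-through",
                         "cluster_140c": "write-back"},
    "overrides": {"vm_126v": "write-through"},
    "manual": {"vm_126z"},  # hypothetical manually configured element
}
assert resolve_caching_mode(policy, "cluster_140c", "vm_126a") == "write-back"
assert resolve_caching_mode(policy, "cluster_140a", "vm_126v") == "write-through"
assert resolve_caching_mode(policy, "cluster_140a", "vm_126z") is None
```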
- the server computing device determines ( 316 ) that a virtual machine element, hosted by a first host computing device (e.g., host system 120 a ), requires configuration.
- a host system 120 may be a stateless system that does not store configuration information when the host system 120 goes off-line (e.g., due to a power cycle, power outage, system failure, etc.).
- a host system 120 that is a stateless system requires configuration when the host system goes off-line and subsequently returns to being on-line.
- determination that a virtual machine element requires configuration occurs when host systems 120 are initially configured.
- a computing device (e.g., application server system 110 ) performs periodic and/or user-initiated polling to discover new unconfigured host devices that have come on-line (e.g., as a component of a respective cluster 140 ), previously configured host devices that were previously off-line and have returned to being on-line, etc.
- host devices (or at least some host devices) in the distributed system 100 come online, for example, as part of a boot sequence or other automatic sequence, thereby causing the computing device (e.g., application server system 110 ) to discover that those host devices require configuration.
- the determination that a virtual machine element requires configuration occurs when a host system 120 hosting the virtual machine element is discovered.
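The polling-based discovery described above can be sketched as a simple inventory scan. The inventory interface is hypothetical; a real implementation would query cluster membership and configuration state:

```python
# Hypothetical discovery step: find hosts that are on-line but have no
# recorded configuration (newly added hosts, or stateless hosts that lost
# their configuration across a power cycle and have returned on-line).

def hosts_requiring_configuration(inventory, configured):
    """inventory maps host id -> on-line flag; configured is the set of
    host ids known to be configured. Returns unconfigured on-line hosts."""
    return sorted(host for host, online in inventory.items()
                  if online and host not in configured)

inventory = {"host_120a": True, "host_120m": True, "host_120x": False}
configured = {"host_120a"}
# host_120m is on-line and unconfigured (e.g., stateless and just rebooted);
# host_120x is off-line, so it is skipped until it returns on-line.
assert hosts_requiring_configuration(inventory, configured) == ["host_120m"]
```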
- the server computing device determines ( 318 ) whether the virtual machine element is a virtual machine element of the first set of one or more virtual machine elements or the second set of one or more virtual machine elements. For example, the server computing device evaluates the policy to determine how the policy defines the first set of one or more virtual machine elements and the second set of one or more virtual machine elements to determine to which set the virtual machine element belongs.
- a policy indicates that a write-through mode is to be used for virtual machine elements of cluster 140 a and a write-back mode is to be used for virtual machine elements of cluster 140 c .
- server computing device determines that a virtual element hosted by host system 120 m is a member of a set of one or more virtual machine elements of cluster 140 c (e.g., by comparing identifying information of the virtual machine element and/or identifying information of host system 120 m with a naming convention, an IP address or IP address pattern, a MAC address and/or other information used to identify sets of one or more virtual machine elements in the policy).
- determining that the virtual machine element is a virtual machine element of the first set of one or more virtual machine elements or the second set of one or more virtual machine elements includes determining ( 320 ) that identifying information for a virtual machine element matches first information for the first set of one or more virtual machine elements or matches second information for the second set of one or more virtual machine elements.
- a policy may include a list of virtual machine names or indicate a pattern used for virtual machine names.
- identifying information for virtual machines of cluster 140 a includes the text “CLUSTER_140A” and identifying information for virtual machines of cluster 140 c includes the text “CLUSTER_140C.”
- the server computing device determines whether the names of the virtual machines 126 of host system 120 m include the text "CLUSTER_140A" or "CLUSTER_140C"; if the names include the text "CLUSTER_140C," the virtual machines 126 of host system 120 m are determined to belong to a set of one or more virtual machine elements to which a policy for cluster 140 c is to be applied.
- when identifying information for a host system 120 m includes the text "CLUSTER_140C," a virtual machine element of host system 120 m is determined to belong to a set of one or more virtual machine elements to which a policy for cluster 140 c is to be applied.
- identifying information for virtual machines of a respective cluster 140 includes an IP address matching a pattern or falling within a range of IP addresses specified by the policy for the respective cluster 140 .
- determining that the virtual machine element is a virtual machine element of the first set of one or more virtual machine elements or the second set of one or more virtual machine elements includes determining that size information for a virtual machine element matches first information for the first set of one or more virtual machine elements or matches second information for the second set of one or more virtual machine elements.
- Size information may include, e.g., a particular size, a minimum size, a maximum size, a percentage of available LSSD capacity, etc.
- for example, size information for a first set of virtual drives in cluster 140 c indicates a minimum size of 256 gigabytes (GB) and a maximum size of 376 GB.
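The membership tests described above, matching a virtual machine element's identifying information (a name substring, an IP address range) or its size information against a set's criteria, can be sketched as follows; all field names are illustrative assumptions:

```python
# Hypothetical set-membership test for a virtual machine element against
# the criteria a storage policy might use to define a set.
import ipaddress

def matches_set(element, criteria):
    """element: dict with "name", "ip", "size_gb"; criteria: any of
    "name_contains", "ip_network", and "min_gb"/"max_gb"."""
    if "name_contains" in criteria and criteria["name_contains"] in element["name"]:
        return True
    if "ip_network" in criteria:
        net = ipaddress.ip_network(criteria["ip_network"])
        if ipaddress.ip_address(element["ip"]) in net:
            return True
    if "min_gb" in criteria and "max_gb" in criteria:
        if criteria["min_gb"] <= element["size_gb"] <= criteria["max_gb"]:
            return True
    return False

vm = {"name": "web01_CLUSTER_140C", "ip": "10.0.3.7", "size_gb": 300}
set_140c = {"name_contains": "CLUSTER_140C", "ip_network": "10.0.3.0/24",
            "min_gb": 256, "max_gb": 376}  # example sizes from the text
set_140a = {"name_contains": "CLUSTER_140A"}
assert matches_set(vm, set_140c) and not matches_set(vm, set_140a)
```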
- In response to determining that the virtual machine element is a virtual machine element of the first set of one or more virtual machine elements, the server computing device applies ( 322 ) the first caching mode to a section of an LSSD associated with the virtual machine element, thereby configuring the caching mode of the virtual machine element. In response to determining that the virtual machine element is a virtual machine element of the second set of one or more virtual machine elements, the server computing device applies ( 324 ) the second caching mode to the section of the LSSD associated with the virtual machine element, thereby configuring the caching mode of the virtual machine element.
- the server computing device (e.g., application server system 110 ) is distinct from the first host computing device (e.g., host system 120 m ) that hosts the virtual machine element (e.g., virtual machine 126 a ).
- the server computing device establishes ( 326 ) a host pair between the first host computing device (e.g., host system 120 m ), which hosts the virtual machine element (e.g., virtual machine 126 a ), and a second host computing device (e.g., host system 120 a ) that is different from the first host computing device.
- data cached for the virtual machine element by the first host computing device is also cached by the second host computing device, thereby replicating or mirroring the data cached for the virtual machine element by the first host computing device at the second host computing device.
- Replicating, at a second host computing device, data cached for the virtual machine element by the first host computing device is a technique used to maintain data integrity.
- pairing is performed automatically, e.g., by server computing device 110 , in accordance with default pairing techniques (e.g., a second host computing device is paired with a first host computing device when the IP address for the second host computing device immediately follows the IP address for the first host computing device).
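The default adjacent-IP pairing rule just mentioned can be sketched as follows; the function name is hypothetical, while the adjacency rule itself comes from the text:

```python
# Hypothetical implementation of the default pairing rule: pair each host
# with the host whose IP address is the immediately following address.
import ipaddress

def default_pairs(host_ips):
    """Return (first_host, second_host) pairs for adjacent IP addresses."""
    by_ip = {ipaddress.ip_address(ip): ip for ip in host_ips}
    pairs = []
    for addr, ip in sorted(by_ip.items()):
        nxt = addr + 1  # the adjacent, immediately following address
        if nxt in by_ip:
            pairs.append((ip, by_ip[nxt]))
    return pairs

hosts = ["10.0.0.5", "10.0.0.6", "10.0.0.9"]
# 10.0.0.5 is paired with the adjacent 10.0.0.6; 10.0.0.9 has no adjacent
# peer, so a manual pairing policy would be needed for it.
assert default_pairs(hosts) == [("10.0.0.5", "10.0.0.6")]
```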
- the storage policy received or accessed by server computing device 110 includes one or more “manually” selected pairs of host systems 120 , as described next. For example, such techniques are used when devices of host pairs must belong to different racks, when devices of host pairs have different solid state drive types, etc.
- a storage policy received or accessed by server computing device 110 (e.g., a policy generated by a system administrator) includes information identifying "manually selected" pairs of hosts, where the "manual" aspect of the pairing is from the perspective of the person adding host pairing information or policies to a set of storage policies to be used by server computing device 110 .
- the host pairs may be specified according to a user-defined naming convention (e.g., where hosts having identifiers that match a specified first pattern are paired with hosts having identifiers that match a specified second pattern).
- server computing device 110 so as to configure host computing systems in accordance with the host pairing aspect of the received or accessed policy, server computing device 110 provides identifying information for the first host computing system of a host pair to the second host computing system of the host pair and/or provides identifying information for the second host computing system of the host pair to the first host computing system of the host pair.
- the server computing device stores ( 328 ) identifying information for the first host computing device in association with identifying information for the second host computing device.
- the server computing device may store, locally or on a remote device, a table (e.g., a database table) that maps, for one or more host pairs, a first host of a host pair to a second host of a host pair.
- the table may include identifying information (e.g., names, IP addresses, MAC addresses, or the like) of the hosts in each host pair.
- the mapping may be indicated by a table including a record with identifying information for a first host and a second host.
- the aforementioned identifying information is included in the storage policy received or accessed by the server computing device.
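The host-pair mapping table described above can be sketched in a few lines. SQLite is used here purely for illustration; the text only requires a table, stored locally or remotely, that maps a first host of a pair to a second host, and the schema below is an assumption:

```python
# Hypothetical host-pair mapping table (names and IP addresses per host).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE host_pairs (
                    first_host_name TEXT, first_host_ip TEXT,
                    second_host_name TEXT, second_host_ip TEXT)""")
# One record per host pair, with identifying information for both hosts.
conn.execute("INSERT INTO host_pairs VALUES (?, ?, ?, ?)",
             ("host_120m", "10.0.0.5", "host_120a", "10.0.0.6"))

# Look up the replication peer for a given first host.
row = conn.execute("SELECT second_host_name FROM host_pairs "
                   "WHERE first_host_name = ?", ("host_120m",)).fetchone()
assert row == ("host_120a",)
```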
- a default host pairing policy or pattern may be specified in a default policy for a cluster or other grouping of one or more host computer devices, and a different host pairing policy or pattern may be specified in a sub-policy for a specified or identified subset of the host computer devices in the cluster or other grouping of one or more host computer devices.
- Although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms; these terms are only used to distinguish one element from another. For example, a first transistor could be termed a second transistor, and, similarly, a second transistor could be termed a first transistor, without changing the meaning of the description, so long as all occurrences of the "first transistor" are renamed consistently and all occurrences of the "second transistor" are renamed consistently. The first transistor and the second transistor are both transistors, but they are not the same transistor.
- the term "if" may be construed to mean "when" or "upon" or "in response to determining" or "in accordance with a determination" or "in response to detecting" that a stated condition precedent is true, depending on the context.
- the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
Description
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/925,948 US9720734B2 (en) | 2015-06-30 | 2015-10-28 | Multi-host configuration for virtual machine caching |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562187087P | 2015-06-30 | 2015-06-30 | |
US14/925,948 US9720734B2 (en) | 2015-06-30 | 2015-10-28 | Multi-host configuration for virtual machine caching |
Publications (2)
Publication Number | Publication Date |
---|---|
US20170004090A1 US20170004090A1 (en) | 2017-01-05 |
US9720734B2 true US9720734B2 (en) | 2017-08-01 |
Family
ID=57684159
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/925,948 Active US9720734B2 (en) | 2015-06-30 | 2015-10-28 | Multi-host configuration for virtual machine caching |
Country Status (1)
Country | Link |
---|---|
US (1) | US9720734B2 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10585692B2 (en) | 2017-08-15 | 2020-03-10 | International Business Machines Corporation | Enhancing virtual machine performance using autonomics |
US10805421B2 (en) * | 2018-04-03 | 2020-10-13 | Citrix Systems, Inc. | Data caching for cloud services |
US10628317B1 (en) * | 2018-09-13 | 2020-04-21 | Parallels International Gmbh | System and method for caching data in a virtual storage environment based on the clustering of related data blocks |
US11573709B2 (en) * | 2020-01-07 | 2023-02-07 | International Business Machines Corporation | Maintaining data structures in a memory subsystem comprised of a plurality of memory devices |
US11620055B2 (en) | 2020-01-07 | 2023-04-04 | International Business Machines Corporation | Managing data structures in a plurality of memory devices that are indicated to demote after initialization of the data structures |
US11907543B2 (en) * | 2020-01-07 | 2024-02-20 | International Business Machines Corporation | Managing swappable data structures in a plurality of memory devices based on access counts of the data structures |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160314051A1 (en) * | 2015-04-22 | 2016-10-27 | PernixData, Inc. | Management and utilization of fault domains in distributed cache systems |
- 2015-10-28: US application US14/925,948 filed; patent US9720734B2 (en); legal status: Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160314051A1 (en) * | 2015-04-22 | 2016-10-27 | PernixData, Inc. | Management and utilization of fault domains in distributed cache systems |
Also Published As
Publication number | Publication date |
---|---|
US20170004090A1 (en) | 2017-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9720734B2 (en) | Multi-host configuration for virtual machine caching | |
US9648081B2 (en) | Network-attached memory | |
CN107949842B (en) | Virtual file system supporting multi-tier storage | |
US10831399B2 (en) | Method and system for enabling agentless backup and restore operations on a container orchestration platform | |
US10769024B2 (en) | Incremental transfer with unused data block reclamation | |
US10452279B1 (en) | Architecture for flash storage server | |
TWI439871B (en) | Maintaining storage area network (''san") access rights during migration of operating systems | |
US9760314B2 (en) | Methods for sharing NVM SSD across a cluster group and devices thereof | |
US20150236974A1 (en) | Computer system and load balancing method | |
US9525729B2 (en) | Remote monitoring pool management | |
US10176098B2 (en) | Method and apparatus for data cache in converged system | |
US10216423B1 (en) | Streams across multiple controllers to improve solid state drive performance | |
WO2018090606A1 (en) | Data storage method and device | |
US11755241B2 (en) | Storage system and method for operating storage system based on buffer utilization | |
US20190121709A1 (en) | Distributed extent based replication | |
US11474880B2 (en) | Network state synchronization for workload migrations in edge devices | |
US10176103B1 (en) | Systems, devices and methods using a solid state device as a caching medium with a cache replacement algorithm | |
US20170141958A1 (en) | Dedicated endpoints for network-accessible services | |
US11194746B2 (en) | Exchanging drive information | |
US7725654B2 (en) | Affecting a caching algorithm used by a cache of storage system | |
US20160217098A1 (en) | Fibre Channel Hardware Card Port Assignment and Management Method for Port Names | |
US11036404B2 (en) | Devices, systems, and methods for reconfiguring storage devices with applications | |
US10579277B1 (en) | Non-disruptive insertion of virtualized storage appliance | |
US20210019276A1 (en) | Link selection protocol in a replication setup | |
US9432476B1 (en) | Proxy data storage system monitoring aggregator for a geographically-distributed environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SANDISK TECHNOLOGIES INC., TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KARIPPARA, JAIDIL;SHATS, SERGE;SEMA, ATOKA VIKUTO;REEL/FRAME:037249/0315 Effective date: 20151026 |
|
AS | Assignment |
Owner name: SANDISK TECHNOLOGIES LLC, TEXAS Free format text: CHANGE OF NAME;ASSIGNOR:SANDISK TECHNOLOGIES INC;REEL/FRAME:038812/0954 Effective date: 20160516 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |