US8868884B2 - Method and apparatus for servicing read and write requests using a cache replacement catalog - Google Patents

Info

Publication number
US8868884B2
US8868884B2
Authority
US
United States
Prior art keywords
plurality
catalog
assigned
value
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US14/262,366
Other versions
US20140237182A1 (en)
Inventor
Chetan Venkatesh
Sagar Shyam Dixit
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Atlantis Computing Holdings LLC
Original Assignee
Atlantis Computing Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US 61/505,524 (provisional)
Priority to US 13/269,503 (patent US8732401B2)
Application filed by Atlantis Computing Inc
Priority to US 14/262,366 (patent US8868884B2)
Assigned to Atlantis Computing, Inc. Assignors: Dixit, Sagar Shyam; Venkatesh, Chetan
Publication of US20140237182A1
Change of address: Atlantis Computing, Inc.
Application granted
Publication of US8868884B2
Assigned to Atlantis Computing Holdings, LLC. Assignors: Atlantis Computing, Inc.; Insolvency Services Group, Inc.
Application status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/0808 Multiuser, multiprocessor or multiprocessing cache systems with cache invalidating means
    • G06F 12/126 Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning
    • G06F 12/128 Replacement control using replacement algorithms adapted to multidimensional cache systems, e.g. set-associative, multicache, multiset or multilevel
    • G06F 16/172 Caching, prefetching or hoarding of files
    • G06F 9/45533 Hypervisors; Virtual machine monitors

Abstract

Methods and systems to intelligently cache content in a virtualization environment using virtualization software such as VMware ESX, Citrix XenServer, Microsoft Hyper-V, Red Hat KVM, or their variants are disclosed. Storage IO operations (reads from and writes to disk) are analyzed (or characterized) for their overall value and pinned to cache if their value exceeds a defined threshold based on criteria specific to the New Technology File System (NTFS). Analysis/characterization of NTFS file systems for intelligent dynamic caching includes analyzing storage block data associated with a Virtual Machine of interest in accordance with a pre-determined data model to determine the value of the block under analysis for long-term or short-term caching. Integer values are assigned to different types of NTFS objects in a white list data structure, called a catalog, that can be used to analyze the storage block data.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a divisional of U.S. patent application Ser. No. 13/269,503, filed on Oct. 7, 2011, which claims the benefit of U.S. Provisional Patent Application No. 61/505,524, filed Jul. 7, 2011, and entitled “De-Duplication Of Virtual Machine Files In A Virtualized Desktop Environment,” which are herein incorporated by reference in their entirety and for all purposes.

FIELD OF THE INVENTION

The invention relates generally to storage data caching of Virtual Machine images/disks that execute on a virtual machine hypervisor (virtualization layer). More specifically, the invention relates to a way of determining which storage blocks have a higher value and should be cached or retained in a high speed cache for a longer period of time, and which storage blocks should be discarded or cached/retained for shorter intervals of time because those blocks have lower value.

BACKGROUND OF THE INVENTION

Conventional solutions for virtualization technology provide numerous capabilities to efficiently deliver applications and desktops by packaging them as virtual machines. Virtualization is a technology that provides a software-based abstraction of a physical hardware-based computer. The abstraction layer decouples the physical hardware components (CPU, memory, and disk) from the Operating System (OS) and thus allows many instances of an OS to be run side-by-side as virtual machines (VMs) in complete isolation from one another. The OS within each virtual machine sees a complete, consistent, and normalized set of hardware regardless of the actual physical hardware underneath the software-based abstraction. Virtual machines are encapsulated as files (also referred to as images), making it possible to save, replay, edit, copy, cut, and paste the virtual machine like any other file on a file-system. This ability is fundamental to enabling better manageability and more flexible and quick administration compared to physical machines.

These benefits notwithstanding, conventional VMs suffer from several performance-related weaknesses that arise out of the way the VM interfaces with the storage sub-system(s) that store the VM images or files. Those performance weaknesses include but are not limited to the following examples.

First, every read operation or write operation performed by every single VM (and there can be hundreds if not thousands of VMs performing such operations concurrently) is serviced in a queue by the storage system. This creates a single point of contention that results in below-par performance.

Second, the storage system usually blocks all write operations until a read request is fulfilled. The preference given to read IOs therefore results in data that flows in fits and bursts as the storage system comes under load. In more advanced storage architectures, storage pools are created to isolate applications from being blocked by each other, but the effect is still experienced within each pool.

Third, there are numerous latencies that develop as input/output (IO) is queued at various points in an IO stack from a VM hypervisor to the storage system. Examples of latencies include but are not limited to: (a) when an application residing inside a Guest OS issues an IO, that IO gets queued to the Guest OS's Virtual Adapter driver; (b) the Virtual Adapter driver then passes the IO to a LSI Logic/BusLogic emulator; (c) the LSI Logic/BusLogic emulator queues the IO to a VMkernel's Virtual SCSI layer, and depending on the configuration, IOs are passed directly to the SCSI layer or are passed through a Virtual Machine File System (VMFS) file system before the IO gets to the SCSI layer; (d) regardless of the path followed in (c), ultimately all IOs end up at the SCSI layer; and (e) IOs are then sent to a Host Bus Adapter driver queue. From then on, IOs hit a disk array write cache and finally a back-end disk. Each of steps (a)-(e) above introduces various degrees of latency.

Fourth, Least Recently Used (LRU), Least Frequently Used (LFU), and Adaptive Replacement Cache (ARC) replacement techniques all ultimately rely on building a frequency histogram of block storage accesses to determine a value for keeping or replacing a block in cache memory. Storage systems that rely on these cache management techniques are therefore not effective when servicing virtualization workloads, especially Desktop VMs, because the working set is too diverse for these techniques to consolidate the cache without causing cache fragmentation.

Fifth, in a virtualization environment, there typically exist multiple hierarchical caches in different subsystems, i.e., the Guest OS, the VM Hypervisor, and a Storage Area Network (SAN)/Network Attached Storage (NAS) storage layer. As all the caches are independent of and unaware of each other, each cache implements the same cache replacement policies (e.g., algorithms), and thus all end up caching the same data within each independent cache. This results in an inefficient usage of the cache, as cache capacity is lost to storing the same block multiple times. This is referred to as the cache inclusiveness problem and cannot be overcome without the use of external mechanisms to co-ordinate the contents of the multiple hierarchical caches in different subsystems.

Finally, SAN/NAS-based storage systems that are under load will ultimately always be at a disadvantage when servicing virtualization workloads: they will need to service every IO operation from disk, both because the cache will be overwhelmed and fragmented in the face of a large working set, and because of diminished capacity within the caches due to the aforementioned cache inclusiveness problem.

The above performance weakness examples are a non-exhaustive list and there are other performance weaknesses in conventional virtualization technology.

There are continuing efforts to improve processes, cache techniques, software, data structures, hardware, and systems for virtualization technology.

BRIEF DESCRIPTION OF THE DRAWINGS

Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings:

FIG. 1A depicts one example of a white list data structure for a catalog for NTFS object types, according to various embodiments;

FIG. 1B depicts another example of a white list data structure for a catalog for Windows/NTFS object types, according to various embodiments;

FIG. 2 depicts one example of a process for preparing a catalog, according to various embodiments;

FIG. 3 depicts one example of a process for catalog activation on a newly initialized cache, according to various embodiments;

FIG. 4 depicts one example of a process for read request cache population, according to various embodiments;

FIG. 5 depicts one example of a process for write request cache population, according to various embodiments;

FIG. 6 depicts an example architecture for a cache including a plurality of sets and each set including a plurality of slots, according to various embodiments;

FIGS. 7A-7B depict one example of a process for evaluating assigned values in slot metadata as a basis for slot eviction from a cache, according to various embodiments;

FIG. 8 depicts a block diagram of an exemplary computer system suitable for real time execution of intelligent content aware caching of virtual machine data by relevance to the NTFS file system, according to various embodiments; and

FIG. 9 depicts an exemplary system for real time execution of intelligent content aware caching of virtual machine data by relevance to the NTFS file system, according to various embodiments.

Although the above-described drawings depict various examples of the invention, the invention is not limited by the depicted examples. It is to be understood that, in the drawings, like reference numerals designate like structural elements. Also, it is understood that the drawings are not necessarily to scale.

DETAILED DESCRIPTION

Various embodiments or examples may be implemented in numerous ways, including as a system, a process, an apparatus, a user interface, or a series of program instructions on a computer readable medium such as a computer readable storage medium or a computer network where the program instructions are sent over optical, electronic, or wireless communication links. In general, operations of disclosed processes may be performed in an arbitrary order, unless otherwise provided in the claims.

A detailed description of one or more examples is provided below along with accompanying figures. The detailed description is provided in connection with such examples, but is not limited to any particular example. The scope is limited only by the claims and numerous alternatives, modifications, and equivalents are encompassed. Numerous specific details are set forth in the following description in order to provide a thorough understanding. These details are provided for the purpose of example and the described techniques may be practiced according to the claims without some or all of these specific details. For clarity, technical material that is known in the technical fields related to the examples has not been described in detail to avoid unnecessarily obscuring the description.

In some examples, the described techniques may be implemented as a computer program or application (“application”) or as a plug-in, module, or sub-component of another application. The described techniques may be implemented as software, hardware, firmware, circuitry, or a combination thereof. If implemented as software, then the described techniques may be implemented using various types of programming, development, scripting, or formatting languages, frameworks, syntax, applications, protocols, objects, or techniques, including ASP, ASP.net, .Net framework, Ruby, Ruby on Rails, C, Objective C, C++, C#, Adobe® Integrated Runtime™ (Adobe® AIR™), ActionScript™, Flex™, Lingo™, Java™, Javascript™, Ajax, Perl, COBOL, Fortran, ADA, XML, MXML, HTML, DHTML, XHTML, HTTP, XMPP, PHP, and others. Software and/or firmware implementations may be embodied in a non-transitory computer readable medium configured for execution by a general purpose computing system or the like. The described techniques may be varied and are not limited to the examples or descriptions provided.

The present invention overcomes all of the limitations of the aforementioned conventional solutions for virtualization technology by providing a content aware caching implementation having one or more of the following benefits.

Every block IO request is characterized or analyzed to understand its importance relative to other components within the Virtual Machine's file system. Characterization of block IO requests allows the cache to maintain higher-quality content in the face of numerous IO requests from virtual machines that would fragment a non-content-aware cache.

The characterization allows a score to be assigned to the requested block, and the assigned score is used to evaluate the importance of the block in the event of a cache slot replacement scenario. Consequently, cached blocks that are of higher importance in the NTFS file system are more resistant to replacement and take precedence in the cache.

The cache inclusivity problem is also solved. When multiple hierarchical caches work independently (as would be the case in a typical virtualization scenario), the different caches (though hierarchical) end up including more or less the same blocks and thus are not effective. Due to the content awareness of the application, the cache is able to store a more diverse set of blocks than typical cache replacement mechanisms such as Least Recently Used (LRU) or Least Frequently Used (LFU).

The cache is near-line to the VMs, allowing most IO requests to be serviced by the caching application rather than the SAN/NAS system, thus offloading the SAN/NAS from contention and allowing for performance and response time benefits.

Embodiments of the present invention pertain to methods and systems to increase cache efficacy and to an alternative caching scheme that fronts the SAN/NAS storage subsystem. In one embodiment, a data reduction technique such as data de-duplication is used to store only unique data within the cache. This embodiment relates to the write IO generated by the virtual machines to the storage system. De-duplication techniques are described in U.S. Patent Application No. 61/505,524, filed on Jul. 7, 2011, and titled "De-Duplication Of Virtual Machine Files In A Virtualized Desktop Environment", already incorporated herein by reference.

In another embodiment, heuristics are used to characterize and determine the value of seeking a block from storage, how long to retain the block, and/or when to evict/replace the block from the cache in favor of a block with a higher value. This embodiment relates to the read IO generated by the desktop Virtual Machines as they seek data from storage during discrete phases of their life-cycle, including but not limited to boot, user logon, application launch, and anti-virus scan.

A third embodiment of the invention is an inline virtual storage appliance that runs adjacent to the Desktop Virtual Machine workloads it is servicing. Its proximity allows it to service IO requests more effectively from the cache (better locality of reference), closer to the demand, eliminating the need for the SAN/NAS to service those requests.

The various embodiments of the present invention use a white list data structure called a catalog. The catalog contains a list of hashes of known NTFS objects. Each hash entry in the catalog corresponds to a block of the NTFS cluster size; for example, each block can have a size of 4 KB. The contents of the catalog are pre-determined and contain the hashes of the most frequently used blocks common to Virtual Machines in a Virtualized Desktop environment running the Windows operating system. Corresponding to each hash entry in the catalog is a value field that contains an integer value. For example, the integer values can be between 1 and 3, where 1 is the lowest value and 3 is the highest value. Actual ranges for integer values are application dependent; other integer values may be implemented, and the present invention is not limited to the integer values described herein.

FIG. 1A depicts an exemplary white list data structure for a catalog 100 where hash entries for types of NTFS objects are depicted in a right hand column of the data structure for catalog 100 and integer values assigned to blocks are depicted in a left hand column of the data structure for catalog 100. Values depicted in the left hand column are assigned to the various blocks in the catalog 100 based on the following criteria. A value of 1 marks a block in the right hand column that is never to be cached, meaning that the block is read from storage (e.g., disk) but is never populated in the cache. A value of 2 marks a block in the right hand column that is to be cached and replaced normally according to a default replacement algorithm. A value of 3 marks a block in the right hand column that is to be pinned to the cache and never replaced. Optionally, a value of 0 marks a block in the right hand column that, like the value 1, is never to be cached. The above criteria are only one example; the present invention is not limited to the criteria described herein, and other schemes may be used to assign values to blocks in the catalog and to determine what actions to take based on the assigned values.
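The catalog described above can be sketched as a simple hash-to-value map. The following is a minimal illustration only, assuming a Python dict keyed by CRC-32 hashes of 4 KB blocks; the constant names and the sample block are hypothetical, not taken from the patent.

```python
import zlib

# Hypothetical constants mirroring the value criteria described above:
# 1 (or 0) = never cache, 2 = cache with normal replacement, 3 = pin.
NEVER_CACHE, NORMAL, PINNED = 1, 2, 3

def crc32_hash(block: bytes) -> int:
    """Weak, CPU-cheap hash of a block's contents (CRC-32)."""
    return zlib.crc32(block) & 0xFFFFFFFF

def catalog_value(catalog: dict[int, int], block: bytes) -> int:
    """Look up a block's caching value; unknown blocks get 0."""
    return catalog.get(crc32_hash(block), 0)

# Example: a tiny catalog with one pinned entry.
boot_block = b"NTOSKRNL" + bytes(4088)           # padded to a 4 KB cluster
catalog = {crc32_hash(boot_block): PINNED}

assert catalog_value(catalog, boot_block) == PINNED
assert catalog_value(catalog, bytes(4096)) == 0  # unknown block
```

A collision between two entries, as the text notes, would merely cause one of the two blocks to be fetched from storage instead of cache.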

The catalog 100 is stored in a file 101 on disk 103 or other data storage media and is a part of the caching application. Examples of storage media include but are not limited to a hard disk drive (HDD), a solid state disk drive (SSD), and a RAID storage system, just to name a few. The file 101 can subsequently be read 105 from the disk 103 by an application running on a hypervisor (not shown). Components in the catalog 100 may be hashed using a weak hash, such as a CRC, for example. A CRC-based hashing technique is economical from a CPU resource standpoint, and the CPU can execute the CRC hash quickly. Further, should two entries share a hash, there would be no material side-effect other than one of the two entries being kept in cache while the other entry would have to be fetched from disk or other data storage media.

FIG. 1B depicts another example of an exemplary white list data structure for a catalog 150, where hash entries for types of Windows/NTFS objects are depicted. Catalog 150 is stored in a file 151 on disk 153 or other data storage media and is a part of the caching application. The file 151 can subsequently be read 155 from the disk 153 by an application running on a hypervisor (not shown). Values depicted in the left hand column can be assigned to the various blocks in the catalog 150 based on the criteria described above in reference to catalog 100 of FIG. 1A.

Catalog Preparation

The hash-table data structure for catalog 100 can be populated in the following order: (1) a Virtual Machine image with the Windows operating system is created, either from scratch or from an existing image; (2) the Virtual Machine image is loaded by the Application by means of a mount utility; (3) the Application enumerates the file contents of the operating system and program files directories on the root file-system and stores this enumeration (denoted as a directory enumeration result); (4) each file in the directory enumeration result is then read from disk or other data storage media, sequentially from beginning to end, in segments determined by the NTFS cluster size (e.g., in 4 KB segments); (5) each segment upon read is hashed using a hash function (e.g., a CRC-32 function) and the generated hash is then stored as an entry in the catalog 100 along with its associated content value (see FIGS. 1A and 1B), such as set forth in the non-limiting examples of (a)-(c) below for Windows/NTFS Objects:

(a) HAL components, SAM Registry, Security, NTUser.dat from system32/64 directories are assigned a value of 1; (b) Boot Components, DEFAULT Registry, SYSTEM Registry, NTOSKRNL and related components, win32/64 DLLs, c:\windows\*, prog_files\microsoft, prog_files\office are assigned a value of 3; (c) All remaining content is assigned a value of 2. The file for catalog 100 is an intrinsic part of the Application and is stored along with the Application. Once all the files are processed, the file for catalog 100 is closed and saved to disk or other data storage media.

Turning now to FIG. 2, a process 200 depicts one example of catalog preparation. At a stage 202 a Virtual Machine (VM) image with an OS is created from an existing image or from scratch. At a stage 204 the VM image is loaded into a caching application (Application hereinafter) using a mount utility. At a stage 206 the Application enumerates file contents of the OS and program files directories on the root file system. At a stage 208 the Application stores the enumeration as a directory enumeration result. At a stage 210 each file in the directory enumeration result is sequentially read from beginning to end, from disk or other storage media, in segments determined by the NTFS cluster size (e.g., 4 KB). At a stage 212 each segment that was read is hashed using a hashing function such as CRC-32. Preferably, the hashing function is a weak hashing function or some other type of hashing function that is not compute-time intensive, so as not to create latency due to unnecessarily long compute times. At a stage 214 each hash entry and the value associated with the hash entry are stored in a hash-table data structure such as the catalog 100 of FIG. 1A. At a stage 216 a decision branch makes a query as to whether or not all of the files from the directory enumeration result have been read. If all of the files have not been read, then a "NO" branch is taken and the flow returns to the stage 210 where the sequential reading resumes. If all files have been read, then a "YES" branch is taken and at a stage 218 the file for catalog 100 is closed and saved to disk or other storage media.
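The preparation stages above can be sketched roughly as follows. This is an illustration under stated assumptions, not the patented implementation: the directory walk and the `assign_value` policy are hypothetical stand-ins for the enumeration step and the value rules in examples (a)-(c).

```python
import os
import zlib

CLUSTER = 4096  # assumed NTFS cluster size

def assign_value(path: str) -> int:
    """Hypothetical value policy loosely mirroring examples (a)-(c) above."""
    p = path.lower()
    if any(k in p for k in ("ntuser.dat", "sam", "security")):
        return 1        # never cache
    if any(k in p for k in ("boot", "ntoskrnl", "system32")):
        return 3        # pin to cache
    return 2            # default replacement

def prepare_catalog(root: str) -> dict[int, int]:
    """Read each file sequentially in CLUSTER-sized segments,
    hash each segment with CRC-32, and record its assigned value."""
    catalog: dict[int, int] = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            value = assign_value(path)
            with open(path, "rb") as f:
                while segment := f.read(CLUSTER):
                    catalog[zlib.crc32(segment) & 0xFFFFFFFF] = value
    return catalog
```

A real catalog would be built from a mounted VM image and saved to a file once all files are processed, as stage 218 describes.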

Catalog Activation on a Newly Initialized Cache

When the Application is initialized, the file for catalog 100 is read into memory. Once the catalog 100 is populated into memory, the Application waits to service Block Read and Block Write requests and is activated as Block Read requests are serviced from disk or other storage media and the cache is populated. When the cache is first initialized, the cache is empty and does not contain any data in its slots. The cache is populated by read and write activity through the cache. Every read or write populates one of the many slots of the cache with its payload. As the cache gets populated, the cache metadata is updated with the assigned value from the catalog (e.g., 100 or 150) for each slot, as described below for populating the cache with Read Requests and Write Requests.

FIG. 3 depicts one example of a process flow 300 for catalog activation on a newly initialized cache. At a stage 302 a content aware caching application ("Application") is initialized. At a stage 304, a catalog (e.g., 100 or 150) is read into memory (e.g., memory used by the Application) from disk or other storage media. At a stage 306, the Application waits to service block read and block write requests. At a stage 308, if a block read request has not been serviced, then a NO branch is taken, the flow returns to the stage 306, and the Application continues waiting to service block read and block write requests. However, if a block read request has been serviced, then a YES branch is taken and the catalog is activated at a stage 310. At a stage 312 a cache is populated with read/write requests through the cache. At a stage 314, each read/write request populates a slot in the cache with a payload and metadata. The metadata has been updated with an assigned value from the catalog (e.g., 100 or 150). At a stage 316, cache slot metadata is updated according to read IO protocols for read requests or according to write IO protocols for write requests.

Read Requests and Cache Population

Every first read results in a cache miss, as the cache cannot service the block IO read and fetches it from disk or other storage media. As the read is serviced from disk or other storage media, the catalog value of the read IO request is computed as follows: (a) the content of the block is hashed using a hash function (e.g., CRC-32) and the resulting hash value is stored in memory; and (b) the hash value is compared against the catalog 100 and a catalog value is assigned to the read IO. If the hash value exists in the catalog 100, then the corresponding catalog value is assigned to the read IO and stored in the cache slot's metadata. If the catalog value of the hash is 1, then the block is not populated in the cache. If the hash value does not exist in the catalog, then the value of 0 (zero) is assigned to the read IO and stored in the cache metadata.

Referring now to FIG. 4, a process 400 depicts one example of read request cache population. At a stage 402 a hashing function (e.g., CRC-32) is used to hash the contents of a read IO block. At a stage 404 a hash value generated by the hashing function is stored in memory. At a stage 406 the hash value is compared against the catalog (e.g., 100 or 150). At a stage 408 a determination is made as to whether or not the hash value exists in the catalog. If not, a NO branch is taken and at a stage 412 the metadata for the cache slot is updated with an assigned value of “0” from the catalog. On the other hand, if the hash value exists in the catalog, then a YES branch is taken and at a stage 410 a determination is made as to whether or not the assigned value from the catalog is an assigned value of “1”. If the assigned value is “1”, then a YES branch is taken and at a stage 414 the cache slot metadata is not updated. If the assigned value is not “1”, then a NO branch is taken and at a stage 416 the cache slot metadata is updated with an assigned value from the catalog.
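Stages 402-416 above can be condensed into a small routine. This is a hedged sketch: the function name, the slot dictionary, and the use of a flat dict keyed by block address in place of the set/slot cache structure are all invented for illustration.

```python
import zlib

def populate_on_read(cache: dict, catalog: dict[int, int],
                     lba: int, block: bytes) -> None:
    """On a read miss, hash the fetched block, consult the catalog,
    and populate a cache slot unless the block is marked never-cache."""
    h = zlib.crc32(block) & 0xFFFFFFFF
    value = catalog.get(h, 0)   # absent from catalog -> assigned value 0
    if value == 1:
        return                  # value 1: serviced from disk, never cached
    cache[lba] = {"payload": block, "value": value}

cache: dict = {}
pinned = b"x" * 4096
catalog = {zlib.crc32(pinned) & 0xFFFFFFFF: 3}
populate_on_read(cache, catalog, 7, pinned)       # known block, value 3
populate_on_read(cache, catalog, 8, b"y" * 4096)  # unknown block, value 0
assert cache[7]["value"] == 3 and cache[8]["value"] == 0
```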

Write Requests and Cache Population

Every write request results in the cache being populated (or eviction and then population) if the cache is in write-back mode. If the cache is in write-through mode or a variant such as write-around mode, then the cache is only populated on the subsequent reading of that IO request. In a write-back cache, when a write IO request is stored in the cache, the catalog value of the write IO request is computed as follows: (a) the content of the block is hashed using a hash function (e.g., CRC-32) and the resulting hash value is stored in memory; and (b) the hash value is compared against the catalog 100 and a catalog value is assigned to the write IO. If the hash value exists in the catalog 100, then the corresponding catalog value is assigned to the write IO and stored in the cache slot's metadata. If the catalog value of the hash is 1, then the block is not populated in the cache. If the hash value does not exist in the catalog 100, then the value of 0 (zero) is assigned to the write IO and stored in the cache metadata.

In FIG. 5, a process 500 depicts one example of write request cache population. At a stage 502 a hashing function is used to hash the contents of a write IO block. At a stage 504 a hash value generated by the hashing function is stored in memory. At a stage 506 the hash value is compared against the catalog (e.g., 100 or 150). At a stage 508 a determination is made as to whether or not the hash value exists in the catalog. If not, a NO branch is taken and at a stage 512 the metadata for the cache slot is updated with an assigned value of “0” from the catalog. On the other hand, if the hash value exists in the catalog, then a YES branch is taken and at a stage 510 a determination is made as to whether or not the assigned value from the catalog is an assigned value of “1”. If the assigned value is “1”, then a YES branch is taken and at a stage 514 the cache slot metadata is not updated. If the assigned value is not “1”, then a NO branch is taken and at a stage 516 the cache slot metadata is updated with an assigned value from the catalog.
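The write path differs from the read path mainly in when population happens. A minimal sketch, assuming a hypothetical `mode` flag for the cache policy and the same flat-dict stand-in for the set/slot structure:

```python
import zlib

def populate_on_write(cache: dict, catalog: dict[int, int], mode: str,
                      lba: int, block: bytes) -> None:
    """Write-back caches populate (or evict-then-populate) on the write
    itself; write-through and write-around variants defer population
    to the subsequent read of the same block."""
    if mode != "write-back":
        return                  # cache populated only on a later read
    h = zlib.crc32(block) & 0xFFFFFFFF
    value = catalog.get(h, 0)   # unknown blocks get value 0
    if value == 1:
        return                  # never-cache
    cache[lba] = {"payload": block, "value": value}

cache: dict = {}
populate_on_write(cache, {}, "write-through", 1, b"a" * 4096)
assert 1 not in cache           # deferred in write-through mode
populate_on_write(cache, {}, "write-back", 1, b"a" * 4096)
assert cache[1]["value"] == 0   # unknown block cached with value 0
```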

Cache Replacement Using the Catalog Value

When the cache is full, the cache must choose which items to discard to make room for new ones. In the context of the Application at hand, the cache uses the catalog value from the catalog (e.g., 100 or 150) to determine if a slot can be evicted from a set in the cache as follows: (a) in a given set, slots with a catalog value of 3 are never evicted; if the slot being examined for eviction has a catalog value of 3, then the slot is left intact and the next slot in the series (or set) is evaluated; (b) in a given set, all slots with a catalog value of 0 are evaluated for eviction first, and among them the slot with the oldest time stamp is evicted or replaced; (c) if there are no slots in the set with a catalog value of 0, then all slots with a catalog value of 2 in the set are evaluated and the slot with the oldest time stamp is evicted; (d) if there are no slots with a catalog value of 0 or 2, then all slots with a catalog value of 1 are evaluated and the slot with the oldest time stamp is evicted; and (e) if there are no evictions available in the given set, then other sets in the cache and their associated slots are examined for eviction using steps (a)-(d) above.

FIG. 6 depicts one example of an architecture for a cache 600 that includes a plurality of sets 601, with each set 601 including a plurality of slots 602. The Application can use the catalog values from the catalog to parse 603 the sets (in either direction) and to parse 605 the slots 602 within a set 601 (in either direction) to effectuate slot eviction, denoted as 611 in FIG. 6, using the above cache replacement scheme to evict 611 one or more slots 602 from the sets 601 of cache 600.

Reference is now made to FIGS. 7A-7B where a process 700 for cache slot replacement using catalog values is depicted. At a stage 702 assigned values from catalog (e.g., 100 or 150) are used to determine which slot or slots to evict from a set in a cache (e.g., cache 600). At a stage 704 a slot from a given set in the cache is evaluated. At a stage 706, if the metadata for that slot has an assigned value of “3” from the catalog, then a YES branch is taken and at a stage 708 that slot is not evicted from the cache and at a stage 710 the next slot in the given set may be evaluated and the process can resume at the stage 706. However, if the assigned value is not a “3”, then a NO branch is taken and at a stage 712 all slots within the given set are evaluated to determine if they have an assigned value of “0”. At a stage 714, if slots having metadata with an assigned value of “0” are found, then a YES branch is taken and at a stage 716 the slot in the given set having the assigned value of “0” and having the oldest time stamp is evicted from the cache. On the other hand, if no slots with an assigned value of “0” are found, then a NO branch is taken and at a stage 718 all slots within the given set are evaluated to determine if they have an assigned value of “2”.

Turning now to FIG. 7B, process 700 continues at a stage 720, where if slots having the assigned value of “2” are found a YES branch is taken and at a stage 722 the slot having the assigned value of “2” and having the oldest time stamp is evicted from the cache. If no slots having the assigned value of “2” are found, then a NO branch is taken to a stage 724 where the set is evaluated to see if there are any slots having an assigned value of “0” or “2”. If slots having the assigned value of “0” or “2” are found, then a YES branch is taken and at a stage 726 the process 700 can go to another set in the cache and evaluate the slots in that set, for example, by returning to the stage 704 of FIG. 7A. If no slots having the assigned value of “0” or “2” are found, then a NO branch is taken to a stage 728 where all slots in the set having an assigned value of “1” are evaluated. At a stage 730, if slots having the assigned value of “1” are found, then a YES branch is taken and at a stage 732 the slot having the assigned value of “1” and having the oldest time stamp is evicted from the cache. At a stage 734 the process 700 can go to another set in the cache and evaluate the slots in that set, for example, by returning to the stage 704 of FIG. 7A. On the other hand, if no slots with the assigned value of “1” are found, then a NO branch is taken and at a stage 736 the process 700 can go to another set in the cache and evaluate the slots in that set, for example, by returning to the stage 704 of FIG. 7A.
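The eviction rules of steps (a)-(e), as walked through in process 700, can be sketched as follows. The `Slot` class and the list-of-lists layout for the set-associative cache are illustrative assumptions, not the patented structures; the sketch follows the summarized preference order (value 0, then 2, then 1, with value 3 never evicted).

```python
from typing import List, Optional


class Slot:
    """A cache slot carrying its assigned catalog value and a time stamp."""

    def __init__(self, catalog_value: int, timestamp: float):
        self.catalog_value = catalog_value
        self.timestamp = timestamp


def evict_from_set(slots: List[Slot]) -> Optional[Slot]:
    """Steps (a)-(d): pick a victim within one set.

    Slots with a catalog value of 3 are never evicted; otherwise prefer
    value 0, then 2, then 1, and within a value the oldest time stamp loses.
    """
    for value in (0, 2, 1):  # eviction preference order
        candidates = [s for s in slots if s.catalog_value == value]
        if candidates:
            return min(candidates, key=lambda s: s.timestamp)
    return None  # only value-3 slots (or an empty set): nothing evictable


def evict(cache_sets: List[List[Slot]]) -> Optional[Slot]:
    """Step (e): if a set yields no victim, examine the other sets."""
    for slots in cache_sets:
        victim = evict_from_set(slots)
        if victim is not None:
            slots.remove(victim)
            return victim
    return None
```

For instance, in a set holding slots with values 3, 2, 2, and 1, the older of the two value-2 slots is chosen only because no value-0 slot exists; a set containing only value-3 slots forces the search to move on to the next set, mirroring the return to stage 704 in FIG. 7A.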

FIG. 8 illustrates an exemplary computer system suitable for real time execution of intelligent content aware caching of virtual machine data by relevance to the NTFS file system. In some examples, computer system 800 may be used to implement computer programs, applications, methods, processes, or other software to perform the above-described techniques. Computer system 800 includes a bus 802 or other communication mechanism for communicating information, which interconnects subsystems and devices, such as processor 804, system memory 806 (e.g., RAM), storage device 808 (e.g., ROM, Flash Memory, SSD, etc.), disk drive 810 (e.g., magnetic or optical), communication interface 812 (e.g., modem or Ethernet card), display 814 (e.g., CRT or LCD), input device 816 (e.g., keyboard), and cursor control 818 (e.g., mouse or trackball).

According to some examples, computer system 800 performs specific operations by processor 804 executing one or more sequences of one or more instructions stored in system memory 806. Such instructions may be read into system memory 806 from another computer readable medium, such as static storage device 808 or disk drive 810. In some examples, disk drive 810 can be implemented using an SSD. In some examples, hard-wired circuitry may be used in place of or in combination with software instructions for implementation.

The term “computer readable medium” refers to any tangible medium that participates in providing instructions to processor 804 for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as disk drive 810. Volatile media includes dynamic memory, such as system memory 806. Common forms of computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read. Instructions may further be transmitted or received using a transmission medium. The term “transmission medium” may include any tangible or intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such instructions. Transmission media includes coaxial cables, copper wire, and fiber optics, including wires that comprise bus 802 for transmitting a computer data signal.

In some examples, execution of the sequences of instructions may be performed by a single computer system 800. According to some examples, two or more computer systems 800 coupled by communication link 820 (e.g., LAN, PSTN, or wireless network) may perform the sequence of instructions in coordination with one another. Computer system 800 may transmit and receive messages, data, and instructions, including programs, i.e., application code, through communication link 820 and communication interface 812. Received program code may be executed by processor 804 as it is received, and/or stored in disk drive 810, or other non-volatile storage for later execution. A single computer system 800 may be replicated, duplicated, or otherwise modified to service the needs of real time intelligent content aware caching of virtual machine data by relevance to the NTFS file system in a virtualized desktop environment as described herein.

FIG. 9 depicts an exemplary system for real time intelligent content aware caching of virtual machine data by relevance to the NTFS file system in a virtualized desktop environment. Here, system 900 includes virtual machines (hereafter “VM”s) 902-0-902-n, a VM hypervisor 901, intelligent content aware caching application 911, optional DAS storage 913, primary storage 921-925, storage network 915, user 930, and network 935. The number, type, configuration, topology, connections, or other aspects of system 900 may be varied and are not limited to the examples shown and described. In some examples, 902-0-902-n may be instances of an operating system running on various types of hardware, software, circuitry, or a combination thereof (e.g., x86 servers) that are managed by VM hypervisor 901. As shown, application 911 may be used to implement intelligent content aware caching using a cache memory (e.g., cache 600) into which data may be read or written before being asynchronously (or, in some examples, synchronously) written back to primary storage 921-925. The cache memory may be local to VM hypervisor 901, DAS 913, or elsewhere in system 900. Further, primary storage 921-925 may be implemented as any type of data storage facility such as those described herein (e.g., SAN, NAS, RAID, DAS, disk drives, and others, without limitation). Some or all of the components of system 900 depicted in FIG. 9 can be implemented using at least a portion of the system 800 depicted in FIG. 8.

The foregoing description, for purposes of explanation, uses specific nomenclature to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that specific details are not required in order to practice the invention. In fact, this description should not be read to limit any feature or aspect of the present invention to any embodiment; rather features and aspects of one embodiment can readily be interchanged with other embodiments. Notably, not every benefit described herein need be realized by each embodiment of the present invention; rather any specific embodiment can provide one or more of the advantages discussed above. In the claims, elements and/or operations do not imply any particular order of operation, unless explicitly stated in the claims. It is intended that the following claims and their equivalents define the scope of the invention. Although the foregoing examples have been described in some detail for purposes of clarity of understanding, the above-described inventive techniques are not limited to the details provided. There are many alternative ways of implementing the above-described invention techniques. The disclosed examples are illustrative and not restrictive.

Claims (12)

What is claimed is:
1. A method, comprising:
reading into memory a catalog file including a hash table having a plurality of entries for new technology file system (NTFS) objects and a plurality of associated values, each associated value comprises an integer, and each entry is assigned only one of the plurality of associated values;
waiting to service block read requests and block write requests;
initializing an intelligent content aware caching application running on a VM hypervisor only after a block read request has been received;
populating a slot in a cache with a payload and metadata from each block read request or from each block write request that is received after the caching application has been initialized;
updating the metadata in each slot with one of the plurality of assigned values from the catalog according to a read input-output (IO) protocol for block read requests or according to a write IO protocol for block write requests, wherein the caching application updates each slot using the plurality of entries for NTFS objects and the plurality of associated values read into memory from the catalog.
2. The method of claim 1, wherein the read IO protocol comprises
hashing contents of a read IO block using a hashing function to generate a hash value;
storing the hash value in memory;
comparing the hash value against the plurality of entries for NTFS objects in the catalog;
determining if the hash value exists as one of the plurality of assigned values in the catalog;
updating metadata in the slot for the read IO block with an assigned value of zero from the catalog if the hash value does not exist as one of the plurality of assigned values;
foregoing an update of metadata in the slot for the read IO block with an assigned value from the catalog if the hash value exists as one of the plurality of assigned values and the assigned value is a one; and
updating metadata in the slot for the read IO block with an assigned value from the catalog if the hash value exists as one of the plurality of assigned values but the assigned value is not a one.
3. The method of claim 1, wherein the write IO protocol comprises
hashing contents of a write IO block using a hashing function to generate a hash value;
storing the hash value in memory;
comparing the hash value against the plurality of entries for NTFS objects in the catalog;
determining if the hash value exists as one of the plurality of assigned values in the catalog;
updating metadata in the slot for the write IO block with an assigned value of zero from the catalog if the hash value does not exist as one of the plurality of assigned values;
foregoing an update of metadata in the slot for the write IO block with an assigned value from the catalog if the hash value exists as one of the plurality of assigned values and the assigned value is a one; and
updating metadata in the slot for the write IO block with an assigned value from the catalog if the hash value exists as one of the plurality of assigned values but the assigned value is not a one.
4. The method of claim 1, wherein each associated value comprises a score indicative of a relative importance of corresponding NTFS objects.
5. A non-transitory computer readable storage medium including instructions that, when executed on a computer system cause the computer system to perform a method comprising:
reading into memory a catalog file including a hash table having a plurality of entries for new technology file system (NTFS) objects and a plurality of associated values, each associated value comprises an integer, and each entry is assigned only one of the plurality of associated values;
waiting to service block read requests and block write requests;
initializing an intelligent content aware caching application running on a VM hypervisor only after a block read request has been received;
populating a slot in a cache with a payload and metadata from each block read request or from each block write request that is received after the caching application has been initialized;
updating the metadata in each slot with one of the plurality of assigned values from the catalog according to a read input-output (IO) protocol for block read requests or according to a write IO protocol for block write requests, wherein the caching application updates each slot using the plurality of entries for NTFS objects and the plurality of associated values read into memory from the catalog.
6. The non-transitory computer readable storage medium of claim 5 further comprising read IO protocol instructions that, when executed on the computer system cause the computer system to perform the method comprising:
hashing contents of a read IO block using a hashing function to generate a hash value;
storing the hash value in memory;
comparing the hash value against the plurality of entries for NTFS objects in the catalog;
determining if the hash value exists as one of the plurality of assigned values in the catalog;
updating metadata in the slot for the read IO block with an assigned value of zero from the catalog if the hash value does not exist as one of the plurality of assigned values;
foregoing an update of metadata in the slot for the read IO block with an assigned value from the catalog if the hash value exists as one of the plurality of assigned values and the assigned value is a one; and
updating metadata in the slot for the read IO block with an assigned value from the catalog if the hash value exists as one of the plurality of assigned values but the assigned value is not a one.
7. The non-transitory computer readable storage medium of claim 5 further comprising write IO protocol instructions that, when executed on the computer system cause the computer system to perform the method comprising:
hashing contents of a write IO block using a hashing function to generate a hash value;
storing the hash value in memory;
comparing the hash value against the plurality of entries for NTFS objects in the catalog;
determining if the hash value exists as one of the plurality of assigned values in the catalog;
updating metadata in the slot for the write IO block with an assigned value of zero from the catalog if the hash value does not exist as one of the plurality of assigned values;
foregoing an update of metadata in the slot for the write IO block with an assigned value from the catalog if the hash value exists as one of the plurality of assigned values and the assigned value is a one; and
updating metadata in the slot for the write IO block with an assigned value from the catalog if the hash value exists as one of the plurality of assigned values but the assigned value is not a one.
8. The non-transitory computer readable storage medium of claim 5, wherein each associated value comprises a score indicative of a relative importance of corresponding NTFS objects.
9. A system, comprising:
a memory; and
a processor coupled with the memory configured to
read into the memory a catalog file including a hash table having a plurality of entries for new technology file system (NTFS) objects and a plurality of associated values, each associated value comprises an integer, and each entry is assigned only one of the plurality of associated values,
wait to service block read requests and block write requests,
initialize an intelligent content aware caching application running on a VM hypervisor only after a block read request has been received,
populate a slot in a cache with a payload and metadata from each block read request or from each block write request that is received after the caching application has been initialized,
update the metadata in each slot with one of the plurality of assigned values from the catalog according to a read input-output (IO) protocol for block read requests or according to a write input-output (IO) protocol for block write requests, wherein the caching application updates each slot using the plurality of entries for NTFS objects and the plurality of associated values read into memory from the catalog.
10. The system of claim 9, wherein, to execute the read IO protocol, the processor is further configured to
hash contents of a read IO block using a hashing function to generate a hash value,
store the hash value in memory,
compare the hash value against the plurality of entries for NTFS objects in the catalog,
determine if the hash value exists as one of the plurality of assigned values in the catalog,
update metadata in the slot for the read IO block with an assigned value of zero from the catalog if the hash value does not exist as one of the plurality of assigned values,
forego an update of metadata in the slot for the read IO block with an assigned value from the catalog if the hash value exists as one of the plurality of assigned values and the assigned value is a one, and
update metadata in the slot for the read IO block with an assigned value from the catalog if the hash value exists as one of the plurality of assigned values but the assigned value is not a one.
11. The system of claim 9, wherein, to execute the write IO protocol, the processor is further configured to
hash contents of a write IO block using a hashing function to generate a hash value,
store the hash value in memory,
compare the hash value against the plurality of entries for NTFS objects in the catalog,
determine if the hash value exists as one of the plurality of assigned values in the catalog,
update metadata in the slot for the write IO block with an assigned value of zero from the catalog if the hash value does not exist as one of the plurality of assigned values,
forego an update of metadata in the slot for the write IO block with an assigned value from the catalog if the hash value exists as one of the plurality of assigned values and the assigned value is a one, and
update metadata in the slot for the write IO block with an assigned value from the catalog if the hash value exists as one of the plurality of assigned values but the assigned value is not a one.
12. The system of claim 9, wherein each associated value comprises a score indicative of a relative importance of corresponding NTFS objects.
US14/262,366 2011-07-07 2014-04-25 Method and apparatus for servicing read and write requests using a cache replacement catalog Active US8868884B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US201161505524P true 2011-07-07 2011-07-07
US13/269,503 US8732401B2 (en) 2011-07-07 2011-10-07 Method and apparatus for cache replacement using a catalog
US14/262,366 US8868884B2 (en) 2011-07-07 2014-04-25 Method and apparatus for servicing read and write requests using a cache replacement catalog

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/262,366 US8868884B2 (en) 2011-07-07 2014-04-25 Method and apparatus for servicing read and write requests using a cache replacement catalog

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/269,503 Division US8732401B2 (en) 2011-07-07 2011-10-07 Method and apparatus for cache replacement using a catalog

Publications (2)

Publication Number Publication Date
US20140237182A1 US20140237182A1 (en) 2014-08-21
US8868884B2 true US8868884B2 (en) 2014-10-21

Family

ID=47439359

Family Applications (5)

Application Number Title Priority Date Filing Date
US13/269,525 Active US8996800B2 (en) 2011-07-07 2011-10-07 Deduplication of virtual machine files in a virtualized desktop environment
US13/269,503 Active 2031-12-31 US8732401B2 (en) 2011-07-07 2011-10-07 Method and apparatus for cache replacement using a catalog
US14/262,366 Active US8868884B2 (en) 2011-07-07 2014-04-25 Method and apparatus for servicing read and write requests using a cache replacement catalog
US14/262,380 Active US8874851B2 (en) 2011-07-07 2014-04-25 Systems and methods for intelligent content aware caching
US14/262,357 Active US8874877B2 (en) 2011-07-07 2014-04-25 Method and apparatus for preparing a cache replacement catalog

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US13/269,525 Active US8996800B2 (en) 2011-07-07 2011-10-07 Deduplication of virtual machine files in a virtualized desktop environment
US13/269,503 Active 2031-12-31 US8732401B2 (en) 2011-07-07 2011-10-07 Method and apparatus for cache replacement using a catalog

Family Applications After (2)

Application Number Title Priority Date Filing Date
US14/262,380 Active US8874851B2 (en) 2011-07-07 2014-04-25 Systems and methods for intelligent content aware caching
US14/262,357 Active US8874877B2 (en) 2011-07-07 2014-04-25 Method and apparatus for preparing a cache replacement catalog

Country Status (1)

Country Link
US (5) US8996800B2 (en)

Families Citing this family (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8769236B2 (en) * 2008-04-15 2014-07-01 Microsoft Corporation Remote differential compression applied to storage
US9110785B1 (en) * 2011-05-12 2015-08-18 Densbits Technologies Ltd. Ordered merge of data sectors that belong to memory space portions
US8996800B2 (en) 2011-07-07 2015-03-31 Atlantis Computing, Inc. Deduplication of virtual machine files in a virtualized desktop environment
US9009106B1 (en) 2011-08-10 2015-04-14 Nutanix, Inc. Method and system for implementing writable snapshots in a virtualized storage environment
US8549518B1 (en) 2011-08-10 2013-10-01 Nutanix, Inc. Method and system for implementing a maintenanece service for managing I/O and storage for virtualization environment
US8601473B1 (en) 2011-08-10 2013-12-03 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment
US8863124B1 (en) 2011-08-10 2014-10-14 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment
US9747287B1 (en) 2011-08-10 2017-08-29 Nutanix, Inc. Method and system for managing metadata for a virtualization environment
US9652265B1 (en) 2011-08-10 2017-05-16 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment with multiple hypervisor types
US8850130B1 (en) 2011-08-10 2014-09-30 Nutanix, Inc. Metadata for managing I/O and storage for a virtualization
US8776043B1 (en) * 2011-09-29 2014-07-08 Amazon Technologies, Inc. Service image notifications
JP5831552B2 (en) * 2011-10-18 2015-12-09 富士通株式会社 Transfer control program, the control apparatus and transfer control method
US9235589B2 (en) * 2011-12-13 2016-01-12 International Business Machines Corporation Optimizing storage allocation in a virtual desktop environment
US9146856B2 (en) * 2012-04-10 2015-09-29 Micron Technology, Inc. Remapping and compacting in a memory device
WO2013159174A1 (en) * 2012-04-27 2013-10-31 University Of British Columbia De-duplicated virtual machine image transfer
US9772866B1 (en) 2012-07-17 2017-09-26 Nutanix, Inc. Architecture for implementing a virtualization environment and appliance
US9405689B2 (en) * 2012-11-19 2016-08-02 Marvell World Trade Ltd. Locally caching data from a shared storage
US9069472B2 (en) 2012-12-21 2015-06-30 Atlantis Computing, Inc. Method for dispersing and collating I/O's from virtual machines for parallelization of I/O access and redundancy of storing virtual machine data
US9277010B2 (en) 2012-12-21 2016-03-01 Atlantis Computing, Inc. Systems and apparatuses for aggregating nodes to form an aggregated virtual storage for a virtualized desktop environment
KR20140097924A (en) * 2013-01-30 2014-08-07 한국전자통신연구원 Method for prioritized dual caching and apparatus therefor
US9372865B2 (en) 2013-02-12 2016-06-21 Atlantis Computing, Inc. Deduplication metadata access in deduplication file system
US9471590B2 (en) 2013-02-12 2016-10-18 Atlantis Computing, Inc. Method and apparatus for replicating virtual machine images using deduplication metadata
US9250946B2 (en) 2013-02-12 2016-02-02 Atlantis Computing, Inc. Efficient provisioning of cloned virtual machine images using deduplication metadata
US9219784B2 (en) * 2013-03-07 2015-12-22 International Business Machines Corporation Synchronization of a server side deduplication cache with a client side deduplication cache
US20140324791A1 (en) * 2013-04-30 2014-10-30 Greenbytes, Inc. System and method for efficiently duplicating data in a storage system, eliminating the need to read the source data or write the target data
US20150067283A1 (en) * 2013-08-27 2015-03-05 International Business Machines Corporation Image Deduplication of Guest Virtual Machines
US9760577B2 (en) 2013-09-06 2017-09-12 Red Hat, Inc. Write-behind caching in distributed file systems
US9424058B1 (en) * 2013-09-23 2016-08-23 Symantec Corporation File deduplication and scan reduction in a virtualization environment
US20150095597A1 (en) * 2013-09-30 2015-04-02 American Megatrends, Inc. High performance intelligent virtual desktop infrastructure using volatile memory arrays
US10324754B2 (en) * 2013-11-07 2019-06-18 International Business Machines Corporation Managing virtual machine patterns
KR20150068551A (en) * 2013-12-11 2015-06-22 삼성전자주식회사 Refrigerator, mobile and method for controlling the same
US20150227602A1 (en) * 2014-02-13 2015-08-13 Actifio, Inc. Virtual data backup
US20170154109A1 (en) * 2014-04-03 2017-06-01 Spotify Ab System and method for locating and notifying a user of the music or other audio metadata
US9823842B2 (en) 2014-05-12 2017-11-21 The Research Foundation For The State University Of New York Gang migration of virtual machines using cluster-wide deduplication
US9645841B2 (en) * 2014-06-10 2017-05-09 American Megatrends, Inc. Dynamic virtual machine storage usage monitoring, provisioning, and migration
WO2016048331A1 (en) 2014-09-25 2016-03-31 Hewlett Packard Enterprise Development Lp Storage of a data chunk with a colliding fingerprint
US20160100026A1 (en) * 2014-10-07 2016-04-07 Yahoo! Inc. Fixed delay storage system and its application to networked advertisement exchange
US10185504B1 (en) * 2014-11-26 2019-01-22 Acronis International Gmbh Reducing data transmitted during backup
US9858195B2 (en) * 2014-12-10 2018-01-02 International Business Machines Corporation Near-cache distribution of manifest among peer applications in in-memory data grid (IMDG) non structured query language (NO-SQL) environments
CN105787353A (en) * 2014-12-17 2016-07-20 联芯科技有限公司 Credible application management system and loading method for credible applications
CN104881334B (en) * 2015-02-06 2018-04-10 北京华胜天成软件技术有限公司 Anti-down protection method and system for caching data
US9479265B2 (en) * 2015-02-16 2016-10-25 American Megatrends, Inc. System and method for high speed and efficient virtual desktop insfrastructure using photonics
US9934236B2 (en) * 2015-02-23 2018-04-03 International Business Machines Corporation Streamlining data deduplication
US9703713B2 (en) 2015-02-27 2017-07-11 International Business Machines Corporation Singleton cache management protocol for hierarchical virtualized storage systems
US20180196834A1 (en) * 2015-07-30 2018-07-12 Hewlett Packard Enterprise Development Lp Storing data in a deduplication store
US10089183B2 (en) 2015-07-31 2018-10-02 Hiveio Inc. Method and apparatus for reconstructing and checking the consistency of deduplication metadata of a deduplication file system
US10089320B2 (en) 2015-07-31 2018-10-02 Hiveio Inc. Method and apparatus for maintaining data consistency in an in-place-update file system with data deduplication
US10353872B2 (en) 2016-03-09 2019-07-16 Hiveio Inc. Method and apparatus for conversion of virtual machine formats utilizing deduplication metadata
CN109196457A (en) * 2016-04-11 2019-01-11 慧与发展有限责任合伙企业 It sends de-redundancy data and repairs agency
US10216445B2 (en) * 2017-06-30 2019-02-26 Intel Corporation Key-value deduplication

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6915302B1 (en) 1999-10-01 2005-07-05 International Business Machines Corporation Method, system, and program for accessing files in a file system
US20020124137A1 (en) 2001-01-29 2002-09-05 Ulrich Thomas R. Enhancing disk array performance via variable parity based load balancing
US7133977B2 (en) 2003-06-13 2006-11-07 Microsoft Corporation Scalable rundown protection for object lifetime management
US20050108440A1 (en) 2003-11-19 2005-05-19 Intel Corporation Method and system for coalescing input output accesses to a virtual device
US7669032B2 (en) 2003-11-26 2010-02-23 Symantec Operating Corporation Host-based virtualization optimizations in storage environments employing off-host storage virtualization
US20050114595A1 (en) 2003-11-26 2005-05-26 Veritas Operating Corporation System and method for emulating operating system metadata to provide cross-platform access to storage volumes
AU2005322350B2 (en) 2004-12-23 2010-10-21 Symantec Corporation Network packet capture distributed storage system
KR101274181B1 (en) 2006-02-13 2013-06-14 삼성전자주식회사 Device and method for managing flash memory
US8161353B2 (en) * 2007-12-06 2012-04-17 Fusion-Io, Inc. Apparatus, system, and method for validating that a correct data segment is read from a data storage device
US20100181119A1 (en) 2007-05-28 2010-07-22 Loadsense Technologies Corporation Portable modular scale system
US8880797B2 (en) 2007-09-05 2014-11-04 Emc Corporation De-duplication in a virtualized server environment
US7908436B1 (en) 2008-04-25 2011-03-15 Netapp, Inc. Deduplication of data on disk devices using low-latency random read memory
US8074045B2 (en) 2008-05-30 2011-12-06 Vmware, Inc. Virtualization with fortuitously sized shadow page tables
US8307177B2 (en) 2008-09-05 2012-11-06 Commvault Systems, Inc. Systems and methods for management of virtualization data
US7992037B2 (en) * 2008-09-11 2011-08-02 Nec Laboratories America, Inc. Scalable secondary storage systems and methods
US8495417B2 (en) 2009-01-09 2013-07-23 Netapp, Inc. System and method for redundancy-protected aggregates
JP5407430B2 (en) 2009-03-04 2014-02-05 日本電気株式会社 Storage system
US20100274772A1 (en) 2009-04-23 2010-10-28 Allen Samuels Compressed data objects referenced via address references and compression references
US8566650B2 (en) 2009-08-04 2013-10-22 Red Hat Israel, Ltd. Virtual machine infrastructure with storage domain monitoring
US9235595B2 (en) 2009-10-02 2016-01-12 Symantec Corporation Storage replication systems and methods
US8161128B2 (en) 2009-12-16 2012-04-17 International Business Machines Corporation Sharing of data across disjoint clusters
WO2011083508A1 (en) 2010-01-05 2011-07-14 Hitachi,Ltd. Storage system and its file management method
US8312471B2 (en) 2010-04-26 2012-11-13 Vmware, Inc. File system independent content aware cache
KR101694977B1 (en) 2010-12-17 2017-01-11 한국전자통신연구원 Software architecture for service of collective volume memory, and method for providing service of collective volume memory using the said software architecture
US8442955B2 (en) 2011-03-30 2013-05-14 International Business Machines Corporation Virtual machine image co-migration
US8996800B2 (en) 2011-07-07 2015-03-31 Atlantis Computing, Inc. Deduplication of virtual machine files in a virtualized desktop environment
US9626224B2 (en) 2011-11-03 2017-04-18 Silver Peak Systems, Inc. Optimizing available computing resources within a virtual environment

Patent Citations (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4603380A (en) 1983-07-01 1986-07-29 International Business Machines Corporation DASD cache block staging
US6675214B2 (en) * 1998-05-13 2004-01-06 Hewlett-Packard Development Company, L.P. Method and apparatus for efficient storage and retrieval of objects in and from an object storage device
US7269608B2 (en) * 2001-05-30 2007-09-11 Sun Microsystems, Inc. Apparatus and methods for caching objects using main memory and persistent memory
US6807619B1 (en) 2002-02-21 2004-10-19 Emc Corporation Advancing bank pointer in prime numbers unit
US20050038850A1 (en) 2002-03-06 2005-02-17 Fujitsu Limited Storage system, and data transfer method for use in the system
US20040128470A1 (en) 2002-12-27 2004-07-01 Hetzler Steven Robert Log-structured write cache for data storage devices and systems
US20050131900A1 (en) 2003-12-12 2005-06-16 International Business Machines Corporation Methods, apparatus and computer programs for enhanced access to resources within a network
US7356651B2 (en) 2004-01-30 2008-04-08 Piurata Technologies, Llc Data-aware cache state machine
US20070266037A1 (en) 2004-11-05 2007-11-15 Data Robotics Incorporated Filesystem-Aware Block Storage System, Apparatus, and Method
US20070005935A1 (en) 2005-06-30 2007-01-04 Khosravi Hormuzd M Method and apparatus for securing and validating paged memory system
US20080183986A1 (en) 2007-01-26 2008-07-31 Arm Limited Entry replacement within a data store
US20090089337A1 (en) 2007-10-01 2009-04-02 Microsoft Corporation Efficient file hash identifier computation
US8495288B2 (en) 2008-04-02 2013-07-23 Hitachi, Ltd. Storage controller and duplicated data detection method using storage controller
US20090254507A1 (en) 2008-04-02 2009-10-08 Hitachi, Ltd. Storage Controller and Duplicated Data Detection Method Using Storage Controller
US20090319772A1 (en) 2008-04-25 2009-12-24 Netapp, Inc. In-line content based security for data at rest in a network storage system
US8117464B1 (en) 2008-04-30 2012-02-14 Netapp, Inc. Sub-volume level security for deduplicated data
US20100188273A1 (en) 2008-11-18 2010-07-29 International Business Machines Corporation Method and system for efficient data transmission with server side de-duplication
US20100306444A1 (en) 2009-05-26 2010-12-02 Microsoft Corporation Free-Space Reduction in Cached Database Pages
US20110055471A1 (en) 2009-08-28 2011-03-03 Jonathan Thatcher Apparatus, system, and method for improved data deduplication
US20110071989A1 (en) 2009-09-21 2011-03-24 Ocarina Networks, Inc. File aware block level deduplication
US20110196900A1 (en) 2010-02-09 2011-08-11 Alexandre Drobychev Storage of Data In A Distributed Storage System
US20110276781A1 (en) 2010-05-05 2011-11-10 Microsoft Corporation Fast and Low-RAM-Footprint Indexing for Data Deduplication
US20120016845A1 (en) 2010-07-16 2012-01-19 Twinstrata, Inc System and method for data deduplication for disk storage subsystems
US20120054445A1 (en) 2010-08-31 2012-03-01 Oracle International Corporation Method and system for inserting cache blocks
US20130124523A1 (en) 2010-09-01 2013-05-16 Robert Derward Rogers Systems and methods for medical information analysis with deidentification and reidentification
US20120137054A1 (en) 2010-11-24 2012-05-31 Stec, Inc. Methods and systems for object level de-duplication for solid state devices
US20130166831A1 (en) 2011-02-25 2013-06-27 Fusion-Io, Inc. Apparatus, System, and Method for Storing Metadata
US20130238876A1 (en) 2012-03-07 2013-09-12 International Business Machines Corporation Efficient Inline Data De-Duplication on a Storage System
US20130282627A1 (en) 2012-04-20 2013-10-24 Xerox Corporation Learning multiple tasks with boosted decision trees

Also Published As

Publication number Publication date
US8874851B2 (en) 2014-10-28
US8874877B2 (en) 2014-10-28
US8732401B2 (en) 2014-05-20
US20140237182A1 (en) 2014-08-21
US8996800B2 (en) 2015-03-31
US20140237181A1 (en) 2014-08-21
US20130013844A1 (en) 2013-01-10
US20140237183A1 (en) 2014-08-21
US20130013865A1 (en) 2013-01-10

Similar Documents

Publication Publication Date Title
US8627012B1 (en) System and method for improving cache performance
CN103907097B (en) Multi-tier cache memory subsystem and control method therefor
US8478725B2 (en) Method and system for performing live migration of persistent data of a virtual machine
JP5592942B2 (en) Shortcut input and output in a virtual machine system
US9824018B2 (en) Systems and methods for a de-duplication cache
US20150347434A1 (en) Reducing metadata in a write-anywhere storage system
US9489265B2 (en) Method and system for frequent checkpointing
US8549241B2 (en) Method and system for frequent checkpointing
JP4008826B2 (en) Cache compression engine apparatus for increasing effective cache size by compressing data of an on-chip cache
US9158578B1 (en) System and method for migrating virtual machines
EP2531922B1 (en) Dynamic management of destage tasks in a storage controller
Joo et al. FAST: Quick Application Launch on Solid-State Drives.
Luo et al. Live and incremental whole-system migration of virtual machines using block-bitmap
US9575688B2 (en) Rapid virtual machine suspend and resume
US8645626B2 (en) Hard disk drive with attached solid state drive cache
US20160253201A1 (en) Saving and Restoring State Information for Virtualized Computer Systems
US9507732B1 (en) System and method for cache management
US20080235477A1 (en) Coherent data mover
US20030135729A1 (en) Apparatus and meta data caching method for optimizing server startup performance
CN102707900B (en) Virtual disk storage technology
US8566547B2 (en) Using a migration cache to cache tracks during migration
US20120304171A1 (en) Managing Data Input/Output Operations
Byan et al. Mercury: Host-side flash caching for the data center
US8484405B2 (en) Memory compression policies
Jo et al. Efficient live migration of virtual machines using shared storage

Legal Events

Date Code Title Description
AS Assignment

Owner name: ATLANTIS COMPUTING, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VENKATESH, CHETAN;DIXIT, SAGAR SHYAM;REEL/FRAME:033143/0553

Effective date: 20120224

AS Assignment

Owner name: ATLANTIS COMPUTING, INC., CALIFORNIA

Free format text: CHANGE OF ADDRESS;ASSIGNOR:ATLANTIS COMPUTING, INC.;REEL/FRAME:033754/0922

Effective date: 20140916

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: ATLANTIS COMPUTING HOLDINGS, LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ATLANTIS COMPUTING, INC.;INSOLVENCY SERVICES GROUP, INC.;REEL/FRAME:043716/0766

Effective date: 20170726

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551)

Year of fee payment: 4