KR20160118402A - Access Pattern based Cache Management Method for VDI Mass Data Processing - Google Patents
- Publication number: KR20160118402A
- Application number: KR1020150045931A
- Authority
- KR
- South Korea
- Prior art keywords
- access pattern
- vdi
- data
- cache
- management method
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0662—Virtualisation aspects
- G06F3/0664—Virtualisation aspects at device level, e.g. emulation of a storage device or system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0652—Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
- G06F3/0674—Disk device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
Description
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to VDI (Virtual Desktop Infrastructure) technology, and more particularly, to a cache management method for large-capacity data processing in a VDI environment and a server using the same.
In a VDI environment, cache saturation often occurs as requests for guest OS and user data grow. Although the cache is designed to speed up data processing, once saturated its operation delays I/O processing and slows the service.
Currently, in a small VDI environment, the cache can process data efficiently with only hashing-based lookup and handling of I/O requests. In a VDI environment accessed by many users, however, data with a high hit ratio is evicted from the cache because its utilization frequency per individual user is low.
The cache operation thus becomes counterproductive and cache space management becomes inefficient, so a method for solving this problem is required.
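The eviction problem described above can be reproduced with a minimal least-recently-used cache. The class below is an illustrative sketch, not the patent's code; the file names are invented:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: a stand-in for the hashing-based cache
    described above (illustrative only, not the patent's design)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def access(self, key):
        hit = key in self.store
        if hit:
            self.store.move_to_end(key)         # refresh recency on a hit
        else:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict least recently used
            self.store[key] = True
        return hit

cache = LRUCache(capacity=4)
cache.access("shared.dll")                      # hot data every user needs
# Many users each touch their own private blocks in between...
for user in range(10):
    cache.access(f"user{user}.dat")
# ...so the globally hot block has been evicted despite its high hit ratio.
print("shared.dll" in cache.store)              # False
```

Because recency rather than overall hit ratio drives eviction, the shared block is lost even though every user will request it again.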
SUMMARY OF THE INVENTION The present invention has been made to solve the above problems, and it is an object of the present invention to provide a method and apparatus for efficiently managing the cache space using a user's data access pattern, collected from data access requests in a VDI environment, thereby enhancing system performance.
According to an aspect of the present invention, there is provided a cache management method comprising: generating a data access pattern of a VDI user; and loading, on the second disk, data selected based on the access pattern from among the data stored in the first disk.
The access pattern may be a data set requested by the user.
The cache management method according to an embodiment of the present invention may further include deleting data loaded on the second disk based on the access pattern.
The loading step may be performed on an access pattern whose occurrence frequency is higher than a reference, and the deleting step may be performed on an access pattern whose occurrence frequency is lower than a reference.
The access pattern may be generated in units of programs.
According to another aspect of the present invention, there is provided a server comprising: a management module for generating a data access pattern of a VDI user; and a cache module for loading, on the second disk, a part of the data stored on the first disk based on the access pattern.
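The loading and deleting steps summarized above can be sketched as follows. The class name `AccessPatternManager`, the dict-based "disks", and the threshold value are illustrative assumptions, not details from the patent:

```python
from collections import Counter

class AccessPatternManager:
    """Sketch of the management module: counts how often each access
    pattern (a frozenset of requested data blocks) occurs, preloads
    frequent patterns from the first disk (HDD) into the second disk
    (SSD cache), and deletes infrequent ones. Illustrative only."""
    def __init__(self, hdd, ssd, threshold=3):
        self.hdd = hdd              # dict: block -> data (first disk)
        self.ssd = ssd              # dict acting as the cache (second disk)
        self.threshold = threshold  # the 'reference' occurrence frequency
        self.frequency = Counter()

    def observe(self, pattern):
        pattern = frozenset(pattern)
        self.frequency[pattern] += 1
        if self.frequency[pattern] >= self.threshold:
            for block in pattern:   # load ahead of the user's request
                self.ssd.setdefault(block, self.hdd[block])
        else:
            for block in pattern:   # reclaim space from cold patterns
                self.ssd.pop(block, None)

hdd = {"app.exe": b"...", "core.dll": b"...", "misc.dat": b"..."}
ssd = {}
mgr = AccessPatternManager(hdd, ssd, threshold=2)
mgr.observe({"app.exe", "core.dll"})  # seen once: stays out of the cache
mgr.observe({"app.exe", "core.dll"})  # seen twice: preloaded into the SSD
print(sorted(ssd))                    # ['app.exe', 'core.dll']
```

The design choice illustrated is that admission and eviction are decided per pattern (per set of blocks), not per individual block, which is what protects data that is hot globally but cold for any single user.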
As described above, according to the embodiments of the present invention, performance can be improved by more efficiently managing the cache space using the data access pattern of the user collected based on the data access request in the VDI environment.
FIG. 1 is a diagram illustrating a VDI environment to which the present invention is applicable;
FIG. 2 is a detailed block diagram of the VDI system shown in FIG. 1; and
FIG. 3 is a flowchart provided in the description of a cache management method according to an embodiment of the present invention.
Hereinafter, the present invention will be described in detail with reference to the drawings.
FIG. 1 is a diagram illustrating a Virtual Desktop Infrastructure (VDI) environment to which the present invention is applicable. Referring to FIG. 1, a VDI environment to which the present invention is applicable includes a plurality of VDI clients 10-1, 10-2, 10-3, ... 10-n and a VDI system 100.
The VDI clients 10-1, 10-2, 10-3, ... 10-n provide a virtual desktop service to their users utilizing the resources of the VDI system 100.
The HDD 120 is a large-capacity disk in which an OS and various programs are stored. The SSD 130 functions as a cache in the VDI system 100.
FIG. 2 is a detailed block diagram of the VDI system 100 shown in FIG. 1.
The
The block level
In this process, the block level
The access
The access pattern means a data set that the VDI clients 10-1, 10-2, 10-3, ... 10-n have requested to access. This can be, for example, the data set (an exe file, dll files, lib files) needed to run a program.
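As a hedged illustration of forming such per-program access patterns (the log format, function name, and file names are assumptions, not from the patent), the file requests observed during program execution could be grouped into sets:

```python
def build_access_pattern(request_log):
    """Group logged file requests by program to form per-program access
    patterns, i.e. the set of files (exe, dll, lib, ...) each program
    needed. Illustrative sketch only."""
    patterns = {}
    for program, filename in request_log:
        patterns.setdefault(program, set()).add(filename)
    return patterns

log = [
    ("editor", "editor.exe"), ("editor", "ui.dll"),
    ("editor", "spell.lib"), ("browser", "browser.exe"),
    ("editor", "ui.dll"),            # repeated requests collapse into the set
]
patterns = build_access_pattern(log)
print(sorted(patterns["editor"]))    # ['editor.exe', 'spell.lib', 'ui.dll']
```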
The access
Then, the access
Hereinafter, a cache management method by the access pattern management module 115 will be described in detail with reference to FIG. 3.
As shown in FIG. 3, first, the access pattern management module 115 generates data access patterns of the VDI users (S210).
The access pattern management module 115 then determines whether the occurrence frequency of each access pattern is higher than a reference (S220).
If it is determined that the occurrence frequency of the access pattern is higher than the reference (S220-Y), the access pattern management module 115 loads the data included in the access pattern into the SSD 130 (S230).
Step S230 is an operation of loading data into the cache before a user's request is made. Specifically, the access pattern management module 115 reads the data included in the access pattern from the HDD 120 and loads it into the SSD 130 in advance.
If the data included in the access pattern is already loaded in the SSD 130, this loading operation may be omitted.
This prevents warm data with a high hit rate from being evicted from the SSD 130, and allows it to be restored even if it is lost during cache operation.
On the other hand, when it is determined that the occurrence frequency of the access pattern is less than the reference (S240-Y), the access pattern management module 115 deletes the data included in the access pattern from the SSD 130 (S250).
In step S250, the access
If the data included in the access pattern is not loaded in the SSD 130, this deleting operation may be omitted.
Up to now, preferred embodiments of the access pattern-based cache management method for VDI mass data processing have been described in detail.
The access pattern mentioned in the above embodiments is preferably generated/updated and managed in units of programs. The programs for which access patterns are managed are preferably limited to those frequently accessed by users.
Furthermore, in performing cache management, temporary data that is not included in any access pattern may be periodically deleted by the access pattern management module 115.
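This periodic cleanup might be sketched as follows; the dict-based cache and the function name are assumptions for illustration, not details from the patent:

```python
def purge_temporary_data(ssd, access_patterns):
    """Delete cached blocks that belong to no known access pattern,
    freeing SSD space for pattern-based (warm) data. Sketch only."""
    protected = set().union(*access_patterns) if access_patterns else set()
    for block in list(ssd):      # iterate over a copy so deletion is safe
        if block not in protected:
            del ssd[block]
    return ssd

ssd = {"app.exe": b"1", "tmp1234.dat": b"2", "core.dll": b"3"}
patterns = [{"app.exe", "core.dll"}]
purge_temporary_data(ssd, patterns)
print(sorted(ssd))               # ['app.exe', 'core.dll']
```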
Also, in the embodiment of the present invention, the HDD 120 and the SSD 130 are presented as kinds of disks, and it is needless to say that the technical idea of the present invention can be applied to other types of storage media as well.
The VDI environment assumed in the embodiment of the present invention is also only an example. It goes without saying that the technical idea of the present invention can be applied to server/infrastructure environments other than VDI, as well as to a PC environment.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed embodiments. It will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present invention.
10-1, 10-2, 10-3, ... 10-n: VDI client
100: VDI system 110: VDI server
111: guest OS 113: hypervisor
115: access pattern management module
117: Block level virtual cache module
120: HDD 130: SSD
Claims (6)
1. A cache management method comprising: generating a data access pattern of a VDI user; and loading the selected data on the second disk based on the access pattern among the data stored in the first disk.
2. The cache management method of claim 1, wherein the access pattern is a data set requested by the user.
3. The cache management method of claim 1, further comprising deleting data loaded on the second disk based on the access pattern.
4. The cache management method of claim 3, wherein the loading step is performed for an access pattern whose occurrence frequency is higher than a reference, and the deleting step is performed for an access pattern whose occurrence frequency is less than the reference.
5. The cache management method of claim 1, wherein the access pattern is generated in units of programs.
6. A server comprising: a management module for generating a data access pattern of a VDI user; and a cache module for loading a part of the data stored in the first disk on the second disk based on the access pattern.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150045931A KR20160118402A (en) | 2015-04-01 | 2015-04-01 | Access Pattern based Cache Management Method for VDI Mass Data Processing |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20160118402A (en) | 2016-10-12 |
Family
ID=57173303
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150045931A KR20160118402A (en) | 2015-04-01 | 2015-04-01 | Access Pattern based Cache Management Method for VDI Mass Data Processing |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20160118402A (en) |
- 2015-04-01: Application KR1020150045931A filed in KR (published as KR20160118402A); legal status unknown