CN110209600B - CACHE space allocation method and system based on thin LUN - Google Patents

CACHE space allocation method and system based on thin LUN

Info

Publication number
CN110209600B
CN110209600B (application CN201810169054.9A)
Authority
CN
China
Prior art keywords
lun
metadata
cache space
data
space
Prior art date
Legal status
Active
Application number
CN201810169054.9A
Other languages
Chinese (zh)
Other versions
CN110209600A (en)
Inventor
唐建军
王婵娟
Current Assignee
Macrosan Technologies Co Ltd
Original Assignee
Macrosan Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Macrosan Technologies Co Ltd filed Critical Macrosan Technologies Co Ltd
Priority to CN201810169054.9A
Publication of CN110209600A
Application granted
Publication of CN110209600B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0871Allocation or management of cache space

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a CACHE space allocation method based on a thin LUN, comprising the following steps: when a thin LUN is created on demand, thin LUN metadata is created and a metadata CACHE space is allocated for the thin LUN metadata from the current CACHE; a thin LUN physical space is then created for storing user data, and a user data CACHE space is allocated for the thin LUN physical space according to the number of LUNs contained in the storage system and the free CACHE space remaining in the current CACHE. Compared with the prior art, sufficient CACHE space can be allocated for the thin LUN metadata, so that when the thin LUN processes a read data request or a write data request the metadata is accessed directly from the CACHE space, improving the read-write performance of the thin LUN.

Description

CACHE space allocation method and system based on thin LUN
Technical Field
The application relates to the field of storage, and in particular to a CACHE space allocation method and system based on a thin LUN.
Background
Currently, a Logical Unit Number (LUN) to which the thin provisioning technique is applied is called a thin LUN. Thin provisioning simplifies the configuration management of storage resources and saves physical storage resources. In thin provisioning, the data used to manage the mapping between the logical and physical addresses of a thin LUN is referred to as metadata. When the thin LUN processes a read data request or a write data request, the metadata is first accessed according to the logical address, and whether the logical address is mapped is judged from the mapping relation recorded in the metadata, so that data can be read from or written to the physical space accordingly. On top of thin provisioning, the storage system usually uses physical memory as a CACHE for accessing the physical space allocated to the metadata and the physical space allocated to the thin LUN; that is, a certain CACHE space is allocated for each of them, and by means of the CACHE technique the high-speed access characteristic of physical memory is exploited to optimize the performance of the whole storage system.
When a thin LUN is created, the existing CACHE space allocation scheme first allocates physical space for the thin LUN on demand, then allocates a certain physical space for the metadata, and finally dynamically allocates a CACHE space for the thin LUN physical space and a CACHE space for the metadata physical space according to the number of LUNs (including thin LUNs and ordinary LUNs) contained in the current storage system. Because the CACHE capacity supported by the device is limited, if CACHE space is fixedly allocated to other LUNs or the storage system contains a large number of LUNs, the metadata physical space of the thin LUN may not be allocated enough CACHE space. When the thin LUN then processes a read data request or a write data request, the metadata may not be found in the CACHE and can only be accessed directly from the metadata physical space, which degrades the read-write performance of the thin LUN.
Disclosure of Invention
In view of this, the present application provides a CACHE space allocation method and system based on a thin LUN.
Specifically, the method is realized through the following technical scheme:
a CACHE space allocation method based on a reduced LUN, the method comprising:
when the thin LUN is created as required, the thin LUN metadata is created, and metadata CACHE space is distributed for the thin LUN metadata from the current CACHE;
and creating a reduced LUN physical space, wherein the reduced LUN physical space is used for storing user data, and distributing a user data CACHE space for the reduced LUN physical space according to the number of the LUNs contained in the reduced LUN and the remaining free CACHE space in the current CACHE.
A CACHE space allocation system based on a thin LUN, the system comprising:
a metadata creating unit, configured to create thin LUN metadata when a thin LUN is created on demand;
a first CACHE space allocation unit, configured to allocate a metadata CACHE space for the thin LUN metadata from the current CACHE;
a physical space creating unit, configured to create a thin LUN physical space, where the thin LUN physical space is used to store user data;
and a second CACHE space allocation unit, configured to allocate a user data CACHE space for the thin LUN physical space according to the number of LUNs contained in the storage system and the free CACHE space remaining in the current CACHE.
According to the above technical scheme, when a thin LUN is created on demand, a metadata CACHE space is fixedly allocated for the thin LUN metadata according to the size of the physical space allocated to the metadata, and a user data CACHE space is dynamically allocated for the thin LUN physical space according to the number of LUNs contained in the current storage system and the free CACHE space remaining in the current CACHE. Compared with the prior art, enough metadata CACHE space can be allocated for the thin LUN metadata, so that when the thin LUN processes a read data request or a write data request the metadata is accessed directly from the metadata CACHE space, improving the read-write performance of the thin LUN.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of an implementation of a thin LUN based CACHE space allocation method according to an exemplary embodiment of the present application;
FIG. 2 is a flowchart of metadata CACHE space data refresh according to an exemplary embodiment of the present application;
FIG. 3 is a flowchart of user data CACHE space data writing according to an exemplary embodiment of the present application;
FIG. 4 is a diagram illustrating detection of the current amount of data in the metadata CACHE space according to an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of a write data request record table according to an exemplary embodiment of the present application;
FIG. 6 is a schematic structural diagram of a thin LUN based CACHE space allocation system according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly second information may be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "at the time of", "when", or "in response to determining", depending on the context.
To address the problems described in the background, when performing CACHE space allocation, the present application preferentially creates the thin LUN metadata and allocates a CACHE space for it from the CACHE. To distinguish the CACHE space corresponding to the thin LUN metadata from the CACHE space corresponding to the thin LUN physical space, the two are named the metadata CACHE space and the user data CACHE space respectively: the metadata CACHE space is used for caching metadata, and the user data CACHE space subsequently allocated for the thin LUN physical space is used for caching user data. After the thin LUN metadata has been allocated enough CACHE space, the thin LUN physical space is created, and a user data CACHE space is allocated for it according to the number of LUNs contained in the storage system and the remaining free CACHE space. For purposes of illustration, the following embodiments are provided:
referring to fig. 1, an implementation flowchart of a CACHE space allocation method based on a reduced LUN is provided for the present application, and the method may include the following steps:
s101, when a simplified LUN is created according to needs, simplified LUN metadata is created, and metadata CACHE space is distributed for the simplified LUN metadata from the current CACHE;
the LUN applying the thin provisioning technology is called a thin LUN, when thin provisioning is created as required, a large logic space is assigned to the thin LUN, however, physical space actually allocated to the thin LUN is small, with the increase of data writing quantity, the physical space of the thin LUN is continuously mapped to the logic space, when the physical space of the thin LUN is continuously used and the residual physical space is smaller than a certain threshold value, a physical space allocation mechanism is triggered, and a plurality of physical spaces are allocated from a storage pool to be owned by the thin LUN, so that the physical space is dynamically allocated. Accordingly, the logical address and the physical address in the thin LUN are not mapped in a one-to-one correspondence manner, and in order to manage the mapping relationship between the logical address and the physical address in the thin LUN, the management data is called thin LUN metadata.
Therefore, when the thin LUN is created as needed, the thin LUN metadata is created, and a certain physical space needs to be allocated for the thin LUN metadata to store the metadata. When the data reading request is processed by the simplified LUN, firstly, the metadata is accessed according to the logical address, whether the logical address is mapped or not is judged, if yes, the physical space is accessed, the data is read, and if not, the all-zero data is directly returned; when the data writing request is processed by the simplified LUN, firstly, the metadata is accessed according to the logical address, whether the virtual address is mapped or not is judged, if yes, the data is directly written into the physical space, otherwise, according to the size of the data carried in the data writing request, n physical spaces with the extend as a unit are distributed from the rest physical space for mapping, the metadata is recorded, and then the data is written into the corresponding physical space.
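The read and write paths just described can be sketched as follows. This is a hedged illustration rather than the patented implementation: the class, its fields, and the one-block-per-write simplification are all invented for clarity.

```python
class ThinLUN:
    """Toy thin LUN: metadata is a logical-to-physical mapping, built on demand."""

    def __init__(self, physical_blocks):
        self.metadata = {}                        # logical addr -> physical addr
        self.free = list(range(physical_blocks))  # unmapped physical blocks
        self.store = {}                           # physical addr -> data

    def read(self, laddr):
        paddr = self.metadata.get(laddr)
        if paddr is None:
            return b"\x00"                        # unmapped: return all-zero data
        return self.store[paddr]                  # mapped: read from physical space

    def write(self, laddr, data):
        paddr = self.metadata.get(laddr)
        if paddr is None:                         # unmapped: allocate and record mapping
            paddr = self.free.pop(0)
            self.metadata[laddr] = paddr
        self.store[paddr] = data                  # write into the physical space

lun = ThinLUN(physical_blocks=10)
assert lun.read(42) == b"\x00"                    # unmapped read returns zeros
lun.write(42, b"user data")
assert lun.read(42) == b"user data"
```

Every read and write goes through `self.metadata` first, which is why caching that structure matters so much for performance.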
In consideration of the read-write performance of the thin LUN, CACHE support is provided for the thin LUN metadata, and a corresponding metadata CACHE space is allocated for it.
Based on this principle, the metadata CACHE space corresponding to the thin LUN metadata adopts a fixed allocation strategy: when the thin LUN is created, CACHE space is fixedly allocated for the thin LUN metadata. The specific metadata CACHE space allocation method is as follows: according to the size of the physical space allocated for the metadata, an equal amount of metadata CACHE space is fixedly allocated for the thin LUN metadata from the CACHE.
S102, creating a thin LUN physical space for storing user data, and allocating a user data CACHE space for the thin LUN physical space according to the number of LUNs contained in the storage system and the free CACHE space remaining in the current CACHE.
After the metadata CACHE space has been preferentially allocated for the thin LUN metadata, a corresponding user data CACHE space may be allocated for the thin LUN physical space from the remaining CACHE space. Since the metadata CACHE space is fixedly allocated, the user data CACHE space corresponding to the thin LUN physical space is dynamically allocated according to the currently remaining free CACHE space and the number of LUNs contained in the storage system, where the dynamic allocation of the present application may be an even (average) allocation.
Through the above technical scheme, enough metadata CACHE space is allocated for the thin LUN metadata in advance, so that when the thin LUN processes a read data request or a write data request, the metadata is accessed directly from the metadata CACHE space, improving the read-write performance of the thin LUN.
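The two allocation policies of S101 and S102 (fixed 1:1 for metadata; even division of the remainder for user data) can be sketched as below. Sizes are in GB and the function name is invented; this is one illustrative reading of the scheme, not the actual implementation.

```python
def allocate_cache(total_cache_gb, metadata_phys_gb, num_luns):
    """Fixed metadata CACHE equal to the metadata physical space,
    then the remaining free CACHE divided evenly among the LUNs."""
    metadata_cache = metadata_phys_gb             # fixed 1:1 allocation
    free = total_cache_gb - metadata_cache
    user_cache_per_lun = free / num_luns          # dynamic (average) allocation
    return metadata_cache, user_cache_per_lun

# Numbers matching the worked example in this description:
# 10G total CACHE, 2G metadata physical space, 4 thin LUNs.
meta, per_lun = allocate_cache(10, 2, 4)
assert meta == 2 and per_lun == 2.0
```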
In addition, to further optimize the read-write performance of the thin LUN, the metadata CACHE space data refresh and the user data CACHE space data writing are each optimized, as shown in fig. 2 and fig. 3. The metadata CACHE space data refresh specifically includes the following steps:
s201a, detecting the metadata CACHE space;
the metadata CACHE space is used for caching metadata, and dirty data (i.e., metadata which is rarely accessed) cached in the metadata CACHE space needs to be refreshed into a physical space to which the reduced LUN metadata is allocated, so that the metadata CACHE space can be detected in real time, and the metadata CACHE space can also be periodically detected according to actual conditions.
S201b, when detecting that the amount of data in the metadata CACHE space is greater than or equal to a first threshold, selecting n data from the metadata CACHE space and refreshing them into the physical space allocated for the thin LUN metadata, where the amount of data remaining in the metadata CACHE space after the n data are refreshed is not greater than a second threshold;
The metadata CACHE space can be pictured as a container and the stored metadata as water held in the container, as shown in fig. 4. The metadata CACHE space is detected, and when the amount of data it contains is greater than or equal to the first threshold (that is, when the water in the container reaches a high water line), n data (metadata, i.e., dirty data) are selected from the metadata CACHE space and refreshed into the physical space allocated for the metadata, such that after the refresh the remaining amount of data is not greater than the second threshold. Not all of the data in the metadata CACHE space is refreshed; the second threshold ensures that the remaining metadata in the metadata CACHE space can still be accessed. The first threshold can thus be regarded as the trigger mechanism for data refresh, the second threshold guarantees that remaining metadata stays accessible in the CACHE, and both thresholds can be dynamically adjusted.
S201c, releasing the CACHE space occupied by the selected n data in the metadata CACHE space, and returning to detecting the metadata CACHE space;
After the selected n data are refreshed into the physical space allocated for the thin LUN metadata, the CACHE space they occupied needs to be reclaimed in time so that new data can be cached. Therefore, the CACHE space occupied by the selected n data is released, and detection of the metadata CACHE space continues, so that data can be refreshed promptly the next time the amount of data in the metadata CACHE space reaches the first threshold.
The writing of the user data CACHE space data specifically comprises the following steps:
s301a, when the reduced LUN receives a write data request issued by a server, accessing a user data CACHE space allocated for the physical space of the reduced LUN, and judging whether a flow model is a sequential flow model in the CACHE space, wherein the flow model is used for describing the write data request issued by the server;
generally, the data writing request issued by the server may be regarded as a flow, and a flow model is adopted to describe the data writing request issued by the server. The server issues a write data request, and the logical address corresponding to the write data request issued in the history may be continuous, random, or pseudo-random. When the simplified LUN receives a data writing request sent by a server, the data writing request firstly enters a user data CACHE space, and at the moment, whether the flow model is a sequential flow model is judged.
S301b, if so, writing the data to be written carried in the write data request issued by the server into the user data CACHE space;
If the flow model is a sequential flow model, that is, the logical address corresponding to the currently issued write data request is contiguous with the logical addresses corresponding to the historically issued write data requests, the data to be written is written into the user data CACHE space.
S301c, if not, judging whether the flow model is a pseudo-random flow model;
If the logical address corresponding to the currently issued write data request is not contiguous with the logical addresses of the historically issued write data requests (it may be random or pseudo-random), it is judged whether the flow model is a pseudo-random flow model.
The algorithm for judging whether the flow model is a pseudo-random flow model is as follows: judge whether the logical address corresponding to the currently received write data request is contiguous with the logical address corresponding to any recorded write data request; if so, mark the currently received write data request in a locally preset write data request record table by setting its flag bit to 1 (otherwise 0), where the record table holds a fixed number of write data requests. If the number of marked write data requests accumulated in the current record table exceeds a preset ratio (the ratio is set according to the actual situation), the flow model is judged to be a pseudo-random flow model; otherwise it is not.
When the currently received write data request is recorded and the record table is full, the earliest recorded write data request is evicted and replaced by the currently received one.
In consideration of system performance, the latest m write data requests may preferentially be taken from the recorded write data requests, and it is judged whether the logical address corresponding to the currently received write data request is contiguous with the logical address corresponding to any of these m write data requests; the earliest of the m write data requests is correspondingly evicted and replaced by the currently received one.
S301d, if the flow model is judged to be a pseudo-random flow model, writing the data to be written carried in the write data request issued by the server into the user data CACHE space; otherwise, writing the data into the thin LUN physical space.
Through the above pseudo-random flow model algorithm, if the flow model is judged to be pseudo-random, the write data request issued by the server is written into the user data CACHE space for caching; otherwise it is written directly into the thin LUN physical space for storage.
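Steps S301a to S301d can be read as a small classifier over incoming logical addresses. The sketch below is a hedged interpretation: the record-table length (100), recent-request window (5), and ratio (70%) follow the example values given later in this description, and all class and method names are invented.

```python
from collections import deque

class WriteStreamClassifier:
    TABLE_LEN, WINDOW, RATIO = 100, 5, 0.70

    def __init__(self):
        self.recent = deque(maxlen=self.WINDOW)       # last m logical addresses
        self.flags = deque(maxlen=self.TABLE_LEN)     # fixed-length record table

    def route(self, laddr):
        """Return 'cache' to write into the user data CACHE space,
        or 'physical' to write directly into the thin LUN physical space."""
        sequential = bool(self.recent) and laddr == self.recent[-1] + 1
        contiguous = any(abs(laddr - a) == 1 for a in self.recent)
        self.flags.append(1 if contiguous else 0)     # mark the record table
        self.recent.append(laddr)                     # oldest entry evicted by maxlen
        if sequential:
            return "cache"                            # S301b: sequential flow
        if sum(self.flags) / self.TABLE_LEN > self.RATIO:
            return "cache"                            # S301d: pseudo-random flow
        return "physical"                             # S301d: random flow, bypass

c = WriteStreamClassifier()
c.route(1)
assert c.route(2) == "cache"                          # 2 follows 1: sequential
assert c.route(50) == "physical"                      # isolated address: bypass CACHE
```

The bounded `deque` gives the eviction behavior described above for free: appending to a full table discards the earliest entry.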
For better illustration, the present application is described below with reference to a specific example.
Assume the current storage pool physical space capacity is 1T and the CACHE space is 10G. A thin LUN is created on demand: its logical space is designated as 100G, while the physical space actually allocated to it is 10G. Taking 1G as the unit, the thin LUN logical space is divided into 100 logical blocks and the thin LUN physical space into 10 physical blocks, so the logical addresses can be regarded as the natural numbers 1 to 100 and the physical addresses as the natural numbers 1 to 10.
When the thin LUN is created on demand, the thin LUN metadata is created, and according to the size of the physical space required by the metadata, a 2G physical space (the thin LUN metadata records only mapping relations and occupies little physical space) is divided from the storage pool for the thin LUN metadata. According to this 2G physical space, an equal 2G of CACHE space is allocated from the CACHE at a 1:1 ratio for the thin LUN metadata.
A thin LUN physical space is then created, i.e., the 10G physical space actually allocated, which is used to store user data. A user data CACHE space is allocated for the thin LUN physical space according to the number of LUNs contained in the storage system (if 4 thin LUNs have been created, 4 LUNs are contained) and the remaining 8G of CACHE space. Here dynamic (average) allocation is adopted, i.e., each thin LUN physical space is allocated 2G.
For the 2G metadata CACHE space, some of the cached thin LUN metadata may not be accessed frequently and needs to be flushed into the 2G physical space that has been allocated. The specific refresh algorithm is as follows: the metadata CACHE space has a high water line and a low water line, defaulting to 95% and 80% respectively, and both can be dynamically adjusted according to the actual situation. The metadata CACHE space is detected in real time; when the amount of data in it is greater than or equal to the high water line, that is, the CACHE space occupied by the cached thin LUN metadata is greater than or equal to 2 × 0.95 = 1.9G, n thin LUN metadata are selected from the metadata CACHE space and refreshed into the 2G physical space, such that after the refresh the remaining data occupy no more than the low water line, i.e., no more than 2 × 0.8 = 1.6G.
After the n thin LUN metadata are refreshed, the CACHE space they occupied, namely 0.3G, is released, and detection of the metadata CACHE space continues.
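The arithmetic in this example can be checked directly (purely illustrative; sizes in GB):

```python
capacity = 2.0                        # metadata CACHE space from the example
high_water = capacity * 0.95          # 1.9G: refresh is triggered here
low_water = capacity * 0.80          # 1.6G: ceiling after the refresh
released = high_water - low_water     # CACHE space freed by the flush
assert abs(released - 0.3) < 1e-9     # namely 0.3G, as stated above
```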
When data to be written carried in a write data request issued by the server is written into the user data CACHE space, the procedure is as follows:
The user data CACHE space is accessed and it is judged whether the flow model is a sequential flow model. Specifically, assume the logical addresses corresponding to the write data requests previously issued by the server are 1, 2, 3, 4, and 5, and the logical address corresponding to the current write data request is 6; the flow model is then judged to be a sequential flow model, and the data to be written is written into the user data CACHE space.
If not, it is judged whether the flow model is a pseudo-random flow model. Specifically, assume the logical addresses corresponding to the recorded write data requests are 1, 3, 5, 6, 7 …, and the logical address corresponding to the currently received write data request is 2. To reduce the consumption of system performance as much as possible, the latest 5 write data requests, with logical addresses 1, 3, 5, 6, 7, are taken; since 2 is contiguous with 3, the currently received write data request is marked in the local write data request record table, the earliest recorded write data request is evicted and replaced by the currently received one, and the flag bit is correspondingly set to 1 (otherwise 0). The record table has a fixed length of 100; a schematic diagram is shown in fig. 5.
If the number of flag bits set to 1 in the current record table is 75, the percentage in the table is 75%, which exceeds the preset ratio (assumed to be 70%); the flow model is judged to be a pseudo-random flow model, and the data to be written is written into the user data CACHE space. If the number of flag bits set to 1 is 50, the percentage does not exceed 70%; the flow model is judged not to be a pseudo-random flow model, and the data to be written is written into the thin LUN physical space.
Compared with the prior art, the present application preferentially ensures that enough metadata CACHE space is allocated for the thin LUN metadata, and further optimizes the metadata CACHE space data refresh and the user data CACHE space data writing, greatly improving the read-write performance of the thin LUN.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by hardware instructed by a program; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as ROM, RAM, magnetic disks, or optical disks.
Corresponding to the above method embodiments, the present application further provides a CACHE space allocation system based on a thin LUN, as shown in fig. 6, including a metadata creating unit 601, a first CACHE space allocation unit 602, a physical space creating unit 603, and a second CACHE space allocation unit 604.
The metadata creating unit 601 is configured to create thin LUN metadata when a thin LUN is created on demand;
the first CACHE space allocation unit 602 is configured to allocate a metadata CACHE space for the thin LUN metadata from the current CACHE;
the physical space creating unit 603 is configured to create a thin LUN physical space, where the thin LUN physical space is used to store user data;
the second CACHE space allocation unit 604 is configured to allocate a user data CACHE space for the thin LUN physical space according to the number of LUNs contained in the storage system and the free CACHE space remaining in the current CACHE.
In a specific embodiment of the present application, the first CACHE space allocation unit 602 is specifically configured to:
according to the size of the physical space allocated for the thin LUN metadata, fixedly allocate an equal amount of metadata CACHE space for the thin LUN metadata from the current CACHE.
In one embodiment of the present application, the system further comprises:
a space detection unit 605, configured to detect the metadata CACHE space;
a data flushing unit 606, configured to, when the amount of data in the metadata CACHE space is detected to be greater than or equal to a first threshold, select n pieces of data from the metadata CACHE space and flush them into the physical space allocated for the thin LUN metadata, where the amount of data remaining in the metadata CACHE space after the n pieces of data are flushed is not greater than a second threshold;
and a space releasing unit 607, configured to release the CACHE space occupied by the selected n pieces of data in the metadata CACHE space, and return to detecting the metadata CACHE space.
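One flush cycle of the detection, flushing, and releasing units above can be sketched as follows. The function name, the list-based CACHE model, and the oldest-first selection order are illustrative assumptions; the source only requires that after flushing, at most the second threshold's worth of data remains.

```python
def flush_metadata_cache(cache: list, physical_space: list,
                         first_threshold: int, second_threshold: int) -> int:
    """If the metadata CACHE holds at least first_threshold entries, flush the
    smallest n that brings it down to second_threshold entries, then release
    the CACHE space they occupied. Returns n (0 if no flush was needed)."""
    if len(cache) < first_threshold:
        return 0                          # below the first threshold: keep monitoring
    n = len(cache) - second_threshold     # flush enough to satisfy the second threshold
    selected = cache[:n]                  # e.g. oldest entries first (an assumption)
    physical_space.extend(selected)       # flush into the metadata physical space
    del cache[:n]                         # release the occupied CACHE space
    return n
```

Using two thresholds gives hysteresis: flushing is triggered rarely (at the first threshold) but each trigger frees a sizable batch (down to the second threshold), instead of flushing one entry at a time.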
In one embodiment of the present application, the system further comprises:
a first judging unit 608, configured to, when the thin LUN receives a write data request issued by a server, access the user data CACHE space allocated for the thin LUN physical space and judge in the CACHE space whether a traffic model is a sequential traffic model, where the traffic model is used to describe the write data requests issued by the server;
a first data writing unit 609, configured to, if so, write the data to be written in the write data request issued by the server into the user data CACHE space;
a second judging unit 610, configured to, if not, judge whether the traffic model is a pseudo-random traffic model;
a second data writing unit 611, configured to, if the traffic model is judged to be a pseudo-random traffic model, write the data to be written in the write data request issued by the server into the user data CACHE space, and otherwise write the data into the thin LUN physical space.
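The routing decision made by units 608 through 611 can be condensed into one dispatch function. This is a sketch under the assumption that traffic-model classification happens elsewhere and is passed in as a label; the names and the string labels are illustrative, not from the source.

```python
def handle_write(request: str, traffic_model: str,
                 user_cache: list, physical_space: list) -> str:
    """Route a write request per the detected traffic model: sequential and
    pseudo-random writes go to the user data CACHE; everything else bypasses
    the CACHE and is written directly to the thin LUN physical space."""
    if traffic_model in ("sequential", "pseudo-random"):
        user_cache.append(request)        # cache-friendly patterns are buffered
        return "cache"
    physical_space.append(request)        # fully random writes skip the CACHE
    return "physical"
```

The rationale is that caching fully random writes wastes CACHE space without improving merge or prefetch opportunities, so only sequential and pseudo-random patterns are buffered.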
In a specific embodiment of the present application, the second determining unit 610 specifically includes:
an address continuity judging subunit 6101, configured to, if not, judge whether the logical address corresponding to the currently received write data request is contiguous with the logical address corresponding to a recorded write data request;
a marking subunit 6102, configured to, if so, mark the currently received write data request in a locally preset write data request record table;
a recording subunit 6103, configured to record the currently received write data request, where the preset write data request record table records a fixed number of write data requests;
a traffic model judging subunit 6104, configured to judge that the traffic model is a pseudo-random traffic model if the number of marked write data requests accumulated in the current record table exceeds a preset proportion of the current record table, and otherwise judge that the traffic model is not a pseudo-random traffic model.
In an embodiment of the present application, the recording subunit 6103 is specifically configured to:
evict the earliest recorded write data request among the recorded write data requests and replace it with the currently received write data request.
The implementation of the functions of each unit in the system is described in detail in the corresponding steps of the method embodiments, and is not repeated here.
Since the system embodiment basically corresponds to the method embodiment, reference may be made to the relevant description of the method embodiment. The system embodiment described above is merely illustrative: units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present application. One of ordinary skill in the art can understand and implement the solution without inventive effort.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The foregoing is directed to embodiments of the present invention, and it is understood that various modifications and improvements can be made by those skilled in the art without departing from the spirit of the invention.

Claims (10)

1. A CACHE space allocation method based on a thin LUN, applied to a storage system, the method comprising:
when a thin LUN is created on demand, creating thin LUN metadata, and allocating metadata CACHE space for the thin LUN metadata from a current CACHE;
creating a thin LUN physical space, wherein the thin LUN physical space is used for storing user data, and allocating user data CACHE space for the thin LUN physical space according to the number of LUNs contained in the thin LUN and the remaining free CACHE space in the current CACHE;
when the thin LUN receives a write data request issued by a server, accessing the user data CACHE space allocated for the thin LUN physical space, and judging in the CACHE space whether a traffic model is a sequential traffic model, wherein the traffic model is used for describing the write data requests issued by the server, and the sequential traffic model describes write data requests whose corresponding logical addresses are contiguous with the logical addresses corresponding to historically issued write data requests;
if so, writing the data to be written in the write data request issued by the server into the user data CACHE space;
if not, judging whether the traffic model is a pseudo-random traffic model, wherein the pseudo-random traffic model describes write data requests whose corresponding logical addresses are pseudo-random with respect to the logical addresses corresponding to historically issued write data requests;
and if the traffic model is judged to be a pseudo-random traffic model, writing the data to be written in the write data request issued by the server into the user data CACHE space, and otherwise writing the data into the thin LUN physical space.
2. The method of claim 1, wherein the allocating metadata CACHE space for the thin LUN metadata from a current CACHE comprises:
fixedly allocating, from the current CACHE, a metadata CACHE space equal in size to the physical space allocated for the thin LUN metadata.
3. The method of claim 1, further comprising:
detecting the metadata CACHE space;
when the amount of data in the metadata CACHE space is detected to be greater than or equal to a first threshold, selecting n pieces of data from the metadata CACHE space and flushing them into the physical space allocated for the thin LUN metadata, wherein the amount of data remaining in the metadata CACHE space after the n pieces of data are flushed is not greater than a second threshold;
and releasing the CACHE space occupied by the selected n pieces of data in the metadata CACHE space, and returning to detecting the metadata CACHE space.
4. The method of claim 1, wherein the judging, if not, whether the traffic model is a pseudo-random traffic model comprises:
if not, judging whether the logical address corresponding to the currently received write data request is contiguous with the logical address corresponding to a recorded write data request;
if so, marking the currently received write data request in a locally preset write data request record table, and recording the currently received write data request, wherein the preset write data request record table records a fixed number of write data requests;
and if the number of marked write data requests accumulated in the current record table exceeds a preset proportion of the current record table, judging that the traffic model is a pseudo-random traffic model, and otherwise judging that the traffic model is not a pseudo-random traffic model.
5. The method of claim 4, wherein the recording the currently received write data request comprises:
evicting the earliest recorded write data request among the recorded write data requests and replacing it with the currently received write data request.
6. A CACHE space allocation system based on a thin LUN, the system comprising:
a metadata creating unit, configured to create thin LUN metadata when a thin LUN is created on demand;
a first CACHE space allocation unit, configured to allocate metadata CACHE space for the thin LUN metadata from a current CACHE;
a physical space creating unit, configured to create a thin LUN physical space, wherein the thin LUN physical space is used for storing user data;
a second CACHE space allocation unit, configured to allocate user data CACHE space for the thin LUN physical space according to the number of LUNs contained in the thin LUN and the remaining free CACHE space in the current CACHE;
a first judging unit, configured to, when the thin LUN receives a write data request issued by a server, access the user data CACHE space allocated for the thin LUN physical space and judge in the CACHE space whether a traffic model is a sequential traffic model, wherein the traffic model is used for describing the write data requests issued by the server, and the sequential traffic model describes write data requests whose corresponding logical addresses are contiguous with the logical addresses corresponding to historically issued write data requests;
a first data writing unit, configured to, if so, write the data to be written in the write data request issued by the server into the user data CACHE space;
a second judging unit, configured to, if not, judge whether the traffic model is a pseudo-random traffic model, wherein the pseudo-random traffic model describes write data requests whose corresponding logical addresses are pseudo-random with respect to the logical addresses corresponding to historically issued write data requests;
and a second data writing unit, configured to, if the traffic model is judged to be a pseudo-random traffic model, write the data to be written in the write data request issued by the server into the user data CACHE space, and otherwise write the data into the thin LUN physical space.
7. The system of claim 6, wherein the first CACHE space allocation unit is specifically configured to:
fixedly allocate, from the current CACHE, a metadata CACHE space equal in size to the physical space allocated for the thin LUN metadata.
8. The system of claim 6, further comprising:
a space detection unit for detecting the metadata CACHE space;
a data flushing unit, configured to, when the amount of data in the metadata CACHE space is detected to be greater than or equal to a first threshold, select n pieces of data from the metadata CACHE space and flush them into the physical space allocated for the thin LUN metadata, wherein the amount of data remaining in the metadata CACHE space after the n pieces of data are flushed is not greater than a second threshold;
and a space releasing unit, configured to release the CACHE space occupied by the selected n pieces of data in the metadata CACHE space, and return to detecting the metadata CACHE space.
9. The system according to claim 6, wherein the second judging unit comprises:
an address continuity judging subunit, configured to, if not, judge whether the logical address corresponding to the currently received write data request is contiguous with the logical address corresponding to a recorded write data request;
a marking subunit, configured to, if so, mark the currently received write data request in a locally preset write data request record table;
a recording subunit, configured to record the currently received write data request, wherein the preset write data request record table records a fixed number of write data requests;
and a traffic model judging subunit, configured to judge that the traffic model is a pseudo-random traffic model if the number of marked write data requests accumulated in the current record table exceeds a preset proportion of the current record table, and otherwise judge that the traffic model is not a pseudo-random traffic model.
10. The system according to claim 9, wherein the recording subunit is specifically configured to:
evict the earliest recorded write data request among the recorded write data requests and replace it with the currently received write data request.
CN201810169054.9A 2018-02-28 2018-02-28 CACHE space distribution method and system based on simplified LUN Active CN110209600B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810169054.9A CN110209600B (en) 2018-02-28 2018-02-28 CACHE space distribution method and system based on simplified LUN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810169054.9A CN110209600B (en) 2018-02-28 2018-02-28 CACHE space distribution method and system based on simplified LUN

Publications (2)

Publication Number Publication Date
CN110209600A CN110209600A (en) 2019-09-06
CN110209600B true CN110209600B (en) 2021-05-28

Family

ID=67778722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810169054.9A Active CN110209600B (en) 2018-02-28 2018-02-28 CACHE space distribution method and system based on simplified LUN

Country Status (1)

Country Link
CN (1) CN110209600B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111208942B (en) * 2019-12-25 2023-07-14 曙光信息产业股份有限公司 Distributed storage system and storage method thereof
CN114489508B (en) * 2022-01-26 2023-09-01 重庆紫光华山智安科技有限公司 Data management method, device, equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106201923A (en) * 2016-07-20 2016-12-07 杭州宏杉科技有限公司 Method for reading and writing data and device
CN106547476A (en) * 2015-09-22 2017-03-29 伊姆西公司 For the method and apparatus of data-storage system
CN107239412A (en) * 2017-06-19 2017-10-10 杭州宏杉科技股份有限公司 Memory space collocation method, method for writing data and storage device based on Thin LUN
US9785572B1 (en) * 2014-09-09 2017-10-10 Radian Memory Systems, Inc. Memory controller with multimodal control over memory dies
CN107615252A (en) * 2015-01-05 2018-01-19 邦存科技有限公司 Metadata management in storage system extending transversely

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8250296B2 (en) * 2004-12-01 2012-08-21 Dell Products L.P. System and method for information handling system memory page mapping optimization

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9785572B1 (en) * 2014-09-09 2017-10-10 Radian Memory Systems, Inc. Memory controller with multimodal control over memory dies
CN107615252A (en) * 2015-01-05 2018-01-19 邦存科技有限公司 Metadata management in storage system extending transversely
CN106547476A (en) * 2015-09-22 2017-03-29 伊姆西公司 For the method and apparatus of data-storage system
CN106201923A (en) * 2016-07-20 2016-12-07 杭州宏杉科技有限公司 Method for reading and writing data and device
CN107239412A (en) * 2017-06-19 2017-10-10 杭州宏杉科技股份有限公司 Memory space collocation method, method for writing data and storage device based on Thin LUN

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Space Reclamation Methods in Thin Provisioning Technology (自动精简配置技术中空间回收方法的研究); Han Pujie et al.; Computer and Modernization (《计算机与现代化》); 31 Dec. 2012 (No. 12); pp. 168-173 and 177 *

Also Published As

Publication number Publication date
CN110209600A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
US8909887B1 (en) Selective defragmentation based on IO hot spots
JP6613375B2 (en) Profiling cache replacement
US10169232B2 (en) Associative and atomic write-back caching system and method for storage subsystem
CN104115133B (en) For method, system and the equipment of the Data Migration for being combined non-volatile memory device
US9524238B2 (en) Systems and methods for managing cache of a data storage device
CN101135994B (en) Method and apparatus for dividing buffer memory space and buffer memory controller thereof
CN106970765B (en) Data storage method and device
US9367262B2 (en) Assigning a weighting to host quality of service indicators
CN105339910B (en) Virtual NAND capacity extensions in hybrid drive
CN106201335B (en) Storage system
JP2015529368A (en) Storage translation layer
CN109739696B (en) Double-control storage array solid state disk caching acceleration method
CN110209600B (en) CACHE space distribution method and system based on simplified LUN
KR20150062039A (en) Semiconductor device and operating method thereof
CN110347338B (en) Hybrid memory data exchange processing method, system and readable storage medium
CN104375955B (en) Cache memory device and its control method
US10452548B2 (en) Preemptive cache writeback with transaction support
CN114327270A (en) Request processing method, device, equipment and readable storage medium
KR101026634B1 (en) A method of data storage for a hybrid flash memory
CN109324980A (en) A kind of L2P table management method, method for reading data, device and equipment
US9699263B1 (en) Automatic read and write acceleration of data accessed by virtual machines
CN106339330A (en) Method and system for flushing caches
CN105359116B (en) Buffer, shared cache management method and controller
CN109739688A (en) Snapshot Resources space management, device, electronic equipment
CN112631518B (en) Data storage method and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant