CN111158578A - Storage space management method and device

Storage space management method and device

Info

Publication number
CN111158578A
Authority
CN
China
Prior art keywords
read
cache
parameter
issuing
space
Prior art date
2018-11-08
Legal status
Granted
Application number
CN201811324331.5A
Other languages
Chinese (zh)
Other versions
CN111158578B (en)
Inventor
吴会堂
Current Assignee
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201811324331.5A priority Critical patent/CN111158578B/en
Publication of CN111158578A publication Critical patent/CN111158578A/en
Application granted granted Critical
Publication of CN111158578B publication Critical patent/CN111158578B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638 Organizing or formatting or addressing of data
    • G06F3/064 Management of blocks
    • G06F3/0644 Management of space entities, e.g. partitions, extents, pools
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656 Data buffering arrangements
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0673 Single storage device
    • G06F3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention provides a storage space management method and device, relating to the technical field of storage. The storage space management method comprises the following steps: when it is detected that an allocation unit (Block) to be read back exists in the secondary read cache space, acquiring at least one type of mean evaluation parameter of the storage device at a preset time interval, wherein the mean evaluation parameter is used to characterize the current service pressure of the storage device; and adjusting the issuing policy of the hot-spot data read-back instruction corresponding to the Block to be read back according to the mean evaluation parameter. The method and device avoid blindly issuing read-back instructions that would place heavy pressure on the storage device, thereby preventing interference with the normal operation of upper-layer services and improving the use experience.

Description

Storage space management method and device
Technical Field
The invention relates to the technical field of storage, in particular to a storage space management method and device.
Background
A cache memory (i.e., Cache) is a small but high-speed memory located between the CPU and the main memory, typically composed of DRAM. A secondary read Cache space (also called a Cache pool, such as an SSD Cache) is usually adopted in a storage device to cooperate with the Cache, so as to compensate for the limited size of the Cache space and further improve the random read performance of the system.
The size of the allocation unit (Block) of the secondary read Cache space is generally consistent with the Block size of the read Cache space in the Cache. When the storage device receives a read instruction, the Cache is controlled to read the corresponding data from the main memory and write it into the read Cache space. After the Blocks of the read Cache space are exhausted, the storage device starts to fill the secondary read cache space, that is, the data in the read cache space is written into the secondary read cache space. In this process, some Blocks in the read cache space contain holes (i.e., the data block stored in the Block is smaller than the space of the Block), while a data block can only be successfully filled into the secondary read cache space if its size is consistent with the Block space. Therefore, when a data block smaller than the Block space is to be filled into the secondary read cache space, data must immediately be read back from the main memory so that the data block finally written into the secondary read cache space matches the Block space. This also provides a read-ahead effect for random read services.
However, when the secondary read cache space is filled in this way, a read-back instruction is issued immediately for every data block to be written that is smaller than the Block space, without considering the current upper-layer service pressure. This undoubtedly increases the number of random read commands on the disk and increases the disk pressure. It not only affects the performance of upper-layer services (such as the writing and playback of a monitoring service), but also affects the filling speed of the secondary read cache space, which is detrimental to the timeliness of the next playback.
Disclosure of Invention
The present invention is directed to a method and apparatus for managing storage space, so as to improve the above-mentioned problems.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides a storage space management method, which is applied to a storage controller in a storage device, where the storage device further includes a secondary read cache space, and the storage space management method includes: when detecting that a distribution unit Block to be read back exists in the secondary read cache space, acquiring at least one type of mean value evaluation parameters of the storage device according to a preset time interval, wherein the mean value evaluation parameters are used for representing the current service pressure of the storage device; and adjusting the issuing strategy of the hot spot data read-back instruction corresponding to the Block to be read-back according to the mean value evaluation parameter.
In a second aspect, an embodiment of the present invention provides a storage space management apparatus, which is applied to a storage controller in a storage device, where the storage device further includes a secondary read cache space, and the storage space management apparatus includes: the acquisition module is used for acquiring at least one type of mean value evaluation parameters of the storage device according to a preset time interval when detecting that the distribution unit Block to be read back exists in the secondary read cache space, wherein the mean value evaluation parameters are used for representing the current service pressure of the storage device; and the adjusting module is used for adjusting the issuing strategy of the hotspot data read-back instruction corresponding to the Block to be read-back according to the mean value evaluation parameter.
The difference from the prior art is that, in the storage space management method provided in the embodiment of the present invention, when it is detected that an allocation unit Block to be read back exists in the secondary read cache space, a mean evaluation parameter characterizing the current service pressure of the storage device is obtained in each time interval, and the issuing policy of the hot-spot data read-back instruction corresponding to the Block to be read back is adjusted accordingly. This avoids blindly issuing read-back instructions that would place huge pressure on the storage device, thereby preventing interference with the normal operation of upper-layer services and improving the use experience.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 shows a schematic structural diagram of a storage device according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating steps of a storage space management method according to an embodiment of the present invention.
Fig. 3 shows another part of a flowchart illustrating steps of a storage space management method according to an embodiment of the present invention.
Fig. 4 is a schematic diagram illustrating functional modules of a storage space management apparatus according to an embodiment of the present invention.
Reference numerals: 100 - storage device; 101 - main memory; 102 - communication interface; 103 - storage controller; 104 - system memory; 105 - bus; 200 - storage space management apparatus; 201 - acquisition module; 202 - adjustment module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
For random services, most general read cache scheduling and replacement algorithms have no read-ahead function, because the read cache space is limited: a large amount of data read in advance occupies the read cache space, and if it cannot be used by upper-layer services immediately, it undoubtedly increases the replacement time of other commands. The secondary read cache space is much larger (for example, an SSD Cache can exceed 1 TB), and from the viewpoint of the operating system's scheduling algorithm, data adjacent to recently accessed data has a higher probability of being accessed next. If such data is read out in advance and written into the secondary read cache space, it can be read directly from the secondary read cache space on the next access, which significantly shortens the response time and improves the read performance. This principle is therefore applied when filling the secondary read cache space: if the data block filled into the secondary read cache space is smaller than the Block space of the allocation unit of the secondary read cache space, the missing part is read back from the disk, so as to achieve a read-ahead effect for random services. It should be noted that a Block is the minimum allocation unit of the cache space, and its size may be customized according to service requirements, for example 128K or 512K.
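As a minimal sketch of the completion principle just described (the function and parameter names, and the fixed 512K size, are illustrative assumptions, not taken from the patent), a data block smaller than the allocation unit is padded by a read-back from disk before it enters the secondary read cache:

```python
BLOCK_SIZE = 512 * 1024  # assumed allocation-unit size; the text allows e.g. 128K or 512K

def complete_block(data: bytes, read_back_from_disk) -> bytes:
    """Return a full Block, reading the missing tail back from disk if needed."""
    if len(data) >= BLOCK_SIZE:
        return data[:BLOCK_SIZE]            # already a complete Block
    missing = BLOCK_SIZE - len(data)
    tail = read_back_from_disk(missing)     # the immediate read-back that the invention throttles
    return data + tail
```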
Generally, when an upper-layer service needs to read stored data, the storage device has to execute a large number of read commands. Take a video stream playback service as an example: a new file is generated for every 1 GB of video. Within 1 GB of video data there are 1 SUPER block (recording the version number of the block format and the camera code), 4 MAIN INDEX areas (primary time index areas) of 64K each, where each time index entry corresponds one-to-one to a data unit and records the earliest I-frame-group time of that data unit (usually 64K), and 512 secondary time index areas (where each time index entry corresponds one-to-one to an I-frame group and records the start time of that I-frame group). Assume the Block size of the read cache space is 512K. The total number of commands for 1 GB of video is therefore (1 × 1024 × 1024 KB)/(512 KB) = 2048, while the index commands number 1 + 4 + 512 = 517, which is more than 25%. When video playback occurs, about 1/4 of the read commands require a read-back to be issued. To provide usable data for the upper-layer service, the Cache first reads the corresponding data from the main memory, writes it into the read Cache space of the Cache, and then provides the required data to the upper-layer service. In this process, if the read Cache space of the Cache is used up, the secondary read cache space starts to be filled, that is, the data blocks in the read Cache space of the Cache are written into the secondary read cache space so that the data can be fetched in time when the upper-layer service needs it. Because the index data are of unequal sizes, the Blocks storing index data in the read Cache space of the Cache are smaller than 512K. When such index data need to be filled into the secondary read cache space, the missing data must immediately be read back from the disk, and the data can only be written into the secondary read cache space after the Block is completely filled. Therefore, whenever a data block to be written is smaller than the Block space, a read-back command is immediately issued to complete it, which undoubtedly further increases the number of random read commands on the disk and increases the disk pressure. This not only affects the performance of upper-layer services (such as the writing and playback of a monitoring service), but also affects the filling speed of the secondary read cache space, which is detrimental to the timeliness of the next playback.
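The command-count arithmetic in the example can be checked directly (the sizes are those given above; this is only a worked check, not part of the patent):

```python
total_read_commands = (1 * 1024 * 1024) // 512   # 1 GB expressed in KB, divided by the 512K Block
index_commands = 1 + 4 + 512                     # SUPER + MAIN INDEX areas + secondary index areas
print(total_read_commands, index_commands, round(index_commands / total_read_commands, 4))
# prints: 2048 517 0.2524  -> index commands are just over 25% of all read commands
```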
Therefore, embodiments of the present invention provide a method and an apparatus for managing a storage space, so as to improve the above problem.
Referring to fig. 1, a memory device 100 according to an embodiment of the invention is provided. The operating system of the storage device 100 may be, but is not limited to, a Windows system, a Linux system, and the like. The storage device 100 includes a main memory 101, a communication interface 102, a storage controller 103, a system memory 104, and a bus 105, wherein the main memory 101, the communication interface 102, the system memory 104, and the storage controller 103 are connected via the bus 105, and the storage controller 103 can execute an executable module, such as a computer program, stored therein.
The main memory 101 may be a non-volatile memory (non-volatile memory), such as a memory array composed of at least one Disk memory (i.e., Disk in the figure). The communication connection between the storage device 100 and other devices is realized by at least one communication interface 102 (which may be wired or wireless).
The storage controller 103 may also store a program, such as the storage space management apparatus 200 shown in fig. 4. The storage space management apparatus 200 includes at least one software functional module that can be stored in the storage controller 103 in the form of software or firmware. After receiving an execution instruction, the storage controller 103 executes the program to implement the storage space management method disclosed in the embodiments of the present invention.
The system memory 104 includes a Cache memory (Cache) and a secondary read Cache space. Optionally, the Cache includes a read Cache space and a write Cache space. Optionally, the secondary read Cache space may be an SSD disk (also referred to as an SSD Cache). An SSD disk uses FLASH memory (i.e., FLASH chips) as its storage medium and needs no head seeking, so its read speed is higher than that of a mechanical disk: the typical seek time of a 7200-rpm mechanical disk is about 10 milliseconds, whereas an SSD disk can easily reach 0.1 millisecond or nearly 0. High random read/write speed is the greatest advantage of a solid-state disk, which is why the SSD disk is used as the secondary read cache space.
The bus 105 may be an ISA bus, a PCI bus, or an EISA bus, among others. Only one bi-directional arrow is shown in fig. 1, but this does not mean that there is only one bus 105 or only one type of bus 105.
First embodiment
Referring to fig. 2, fig. 2 shows a method for managing a storage space according to a preferred embodiment of the invention. The above-described storage space management method may be applied to the storage controller 103 shown in fig. 1. Optionally, the method comprises:
step S101, when it is detected that the allocation unit Block to be read back exists in the secondary read buffer space, at least one type of mean evaluation parameter of the storage device 100 is obtained according to a preset time interval.
In the embodiment of the present invention, the above-mentioned mean evaluation parameter may be used to characterize the current traffic pressure of the storage device 100. Optionally, the mean evaluation parameter may include a response time parameter of the main memory 101, a read Cache replacement parameter of the Cache, and a write Cache occupancy parameter of the Cache.
The Blocks in the secondary read cache space include filled Blocks, unfilled Blocks, and Blocks to be read back. A filled Block is a Block whose state information in the secondary read cache space is "completely filled", that is, a data block has been written into the Block and the space it occupies is consistent with the Block space. An unfilled Block is a Block whose state information is "empty", that is, no data has been written into the Block yet. A Block to be read back is a Block whose state information is "not fully filled", that is, a data block has been written into the Block, but the space it occupies is smaller than the Block space. It should be noted that neither an unfilled Block nor a Block to be read back in the secondary read cache space can be used by the upper-layer service; only a filled Block is valid for the upper-layer service, that is, only a filled Block can return data directly to the upper layer after being hit. However, once a Block to be read back has acquired from the main memory 101 the data that fills its empty space, its state is changed to "filled" so that it can be used by the upper-layer service.
Further, a storage area may be opened up in the system memory 104 of the storage device 100 to record the state information of each Block in the secondary read cache space in real time; for example, each Block number and its corresponding state information may be recorded, and the record is updated in time whenever the state of a Block in the secondary read cache space changes. Whether a Block to be read back exists in the secondary read cache space can then be detected by querying this storage area in the system memory 104.
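A minimal sketch, with assumed names, of the per-Block state table described above: the controller records each Block number of the secondary read cache space together with its state, and queries the table to detect Blocks waiting for read-back.

```python
from enum import Enum, auto

class BlockState(Enum):
    FILLED = auto()           # written data occupies the whole Block space
    UNFILLED = auto()         # empty, nothing written yet
    TO_BE_READ_BACK = auto()  # partially written, waiting for a read-back to complete it

block_states: dict[int, BlockState] = {}   # Block number -> current state

def has_block_to_read_back() -> bool:
    """Detection part of step S101: is any Block of the secondary read cache awaiting read-back?"""
    return any(state is BlockState.TO_BE_READ_BACK for state in block_states.values())
```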
Optionally, the response time parameter of the main memory 101 refers to the average response time of the disks of the main memory 101 to commands. For the storage device 100, the most direct reflection of system performance is the speed at which commands return, i.e., the average response time of the disk. Typically, the Cache does not communicate directly with the Disks in the main memory 101, but rather communicates with the Disks through a RAID array. Optionally, in the embodiment of the present invention, the response time of each read/write command is counted by the Cache to determine the response time parameter of the main memory 101. For example, the read/write command response time of the Cache is queried once every second, m1 query results are collected in total, and the response time queried each second is recorded as T1, T2 … Ti. After the m1 queries, the response time parameter of the main memory 101 is calculated using the formula T = (T1*n′1 + T2*n′2 + ... + Ti*n′i)/m1, where m1 = n′1 + n′2 + n′3 + ... + n′i, T represents the response time parameter, n′1 represents the number of queries whose response time was T1, and n′2, n′3 … n′i are defined analogously. When the response time parameter of the main memory 101 exceeds a certain value, the read/write pressure on the Disks of the main memory 101 is too high, or the performance of the Disks of the main memory 101 is currently degraded by other factors (such as temperature); additional commands cannot then continue to be issued to the Disks, otherwise the pressure keeps increasing and even the front-end write service may be affected. For example, in video playback it can be found that if the response time parameter is greater than 1000 ms, playback stuttering occurs.
Optionally, the read Cache replacement parameter of the Cache refers to the average replacement rate of Blocks in the read Cache space of the Cache. It should be noted that the read Cache space of the current Cache uses the least-recently-used (LRU) algorithm to replace Blocks in the read Cache space. When the upper-layer service continually needs to read new data into the read Cache space of the Cache, the data that has not been used for the longest time in the read Cache must be replaced. LRU is implemented with a command queue: the data most recently read from the Disks of the main memory 101 enters the head of the queue, while the data read earliest sits at the tail of the queue. For example, for monitoring playback, the data currently being played back is the newest and enters the head of the queue, while old data played back minutes or hours ago moves to the tail of the queue. Optionally, in this embodiment, the number of Blocks of the read cache space available to the main memory 101 is used as the read quota, and the read cache replacement rate for a unit time is calculated from the read quota and the number of replaced Blocks in the read cache space collected in that unit time, using the formula P = Y/Z, where P represents the read cache replacement rate for that unit time, Y represents the number of replaced Blocks in the read cache space collected in that unit time, and Z represents the read quota. Optionally, the read cache replacement rates for several unit times may be collected to calculate the read Cache replacement parameter of the Cache. For example, the replacement rates for m2 consecutive unit times are collected and denoted in turn as P1, P2 … Pi, and the read Cache replacement parameter of the Cache is calculated using the formula P = (P1*n″1 + P2*n″2 + ... + Pi*n″i)/m2, where m2 = n″1 + n″2 + n″3 + ... + n″i, P represents the read Cache replacement parameter of the Cache, n″1 represents the number of the m2 unit times whose replacement rate was P1, and n″2, n″3 … n″i are defined analogously. Generally, an excessively high read Cache replacement parameter indicates that new data keeps arriving to replace old data; in this case there is little point in filling the data of the read cache space into the SSD Cache, because the probability of it being accessed again is very low. The hot data on the Disks of the main memory 101 can then be considered very scarce, and there is no need to read back and fill the holes of the Blocks to be read back in the secondary cache space.
Optionally, the write Cache occupancy parameter of the Cache refers to the average occupancy rate of the Blocks of the write Cache space of the Cache. It should be noted that, for the storage device 100, the write cache space exploits the fact that its write speed is much higher than the Disk write speed of the main memory 101: when data that needs to be written into the main memory 101 is received from an upper-layer service, the data first arrives in the write cache space, and success is then returned to the upper-layer service. After the written data enters the write cache space, it is divided into stripes according to certain mapping addresses so that it can subsequently be flushed to the Disks of the main memory 101. Taking a video monitoring service as an example, the addresses of video I-frame data are continuous, so they easily form a stripe, and the write cache space flushes such data to the Disks of the main memory 101 immediately after receiving it; index data, which must be updated repeatedly, does not easily form a stripe and therefore stays in the write cache space for a period of time, and is flushed to the Disks of the main memory 101 only after the video corresponding to the index has been completely written and is no longer updated. Blocks of the write cache space that have been flushed to Disk can be reused by new write commands, while the retained Blocks cannot hold new data during the retention period. Accordingly, the number of Blocks of the write cache space available to the main memory 101 is called the write quota, and the write cache occupancy collected at a given time is calculated using the formula K = Q/X, where Q represents the number of Blocks retained in the write Cache space of the Cache, X represents the write quota of the write cache space, and K represents the write cache occupancy. Optionally, the write Cache occupancy parameter of the Cache may be obtained by averaging the write cache occupancies collected and calculated multiple times. For example, m3 write cache occupancies are collected consecutively and denoted in turn as K1, K2 … Ki, and the write Cache occupancy parameter of the Cache is calculated using the formula K = (K1*n‴1 + K2*n‴2 + ... + Ki*n‴i)/m3, where m3 = n‴1 + n‴2 + n‴3 + ... + n‴i, K represents the write Cache occupancy parameter of the Cache, n‴1 represents the number of times the occupancy K1 was collected, and n‴2, n‴3 … n‴i are defined analogously. Generally, when the pressure of the front-end write service is high, additional commands cannot continue to be issued, so as to avoid affecting the write performance of the front-end service.
It should be noted that the response time parameter, the read cache replacement parameter, and the write cache occupancy parameter are all mean values, which avoids the randomness of a single measurement and improves the accuracy with which the current service pressure of the storage device 100 is characterized.
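A minimal sketch, with assumed sampling interfaces, of computing the three mean evaluation parameters above as frequency-weighted averages of the per-interval samples, in the form T = (T1*n′1 + T2*n′2 + ... + Ti*n′i)/m1 with m1 = n′1 + ... + n′i:

```python
from collections import Counter

def weighted_mean(samples: list[float]) -> float:
    counts = Counter(samples)                 # sample value -> number of times observed
    m = sum(counts.values())
    return sum(value * n for value, n in counts.items()) / m

def mean_evaluation_parameters(response_times_ms, replacement_rates, occupancies):
    T = weighted_mean(response_times_ms)   # response time parameter of the main memory
    P = weighted_mean(replacement_rates)   # read cache replacement parameter (per-interval Y/Z)
    K = weighted_mean(occupancies)         # write cache occupancy parameter (per-interval Q/X)
    return T, P, K
```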
And S102, adjusting an issuing strategy of a hotspot data read-back instruction corresponding to the Block to be read-back according to the mean value evaluation parameter.
In the embodiment of the present invention, the storage controller 103 may further store in advance limit values corresponding to the mean evaluation parameters, determined according to the upper-layer service requirements of the storage device 100, such as a response time limit, a maximum replacement rate, and a maximum occupancy rate. Taking video monitoring as an example, extensive tests of the running service show that the response time limit may be set to 1000 ms, the maximum replacement rate to 90%, and the maximum occupancy rate to 80%.
In the embodiment of the invention, the issuing policy includes a policy for determining whether to issue the read-back instruction and a policy for determining how many read-back instructions to issue. Optionally, the current service pressure of the storage device 100 may be judged from the comparison of the response time parameter, the read cache replacement parameter, and the write cache occupancy parameter with the corresponding response time limit, maximum replacement rate, and maximum occupancy rate, and the issuing policy of the read-back instructions for the secondary read cache space may be evaluated comprehensively. Specifically, it may be:
when any one of the response time limit value, the replacement rate maximum value and the occupancy rate maximum value exceeds the corresponding response time limit value, the replacement rate maximum value and the occupancy rate maximum value, data can not be read back. That is, when the response time parameter is not lower than the response time limit value, the issuing of the hot spot data read-back instruction is not executed. And when the read cache replacement parameter is not lower than the maximum replacement rate, not executing issuing of the hot spot data read-back instruction. And when the write cache occupation parameter is not lower than the maximum occupancy rate, not executing the issuing of the hotspot data read-back instruction.
And when the response time parameter is lower than the response time limit value, the read cache replacement parameter is lower than the maximum replacement rate, and the write cache occupation parameter is lower than the maximum occupancy rate, issuing the hot spot data read-back instruction.
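A minimal sketch of the threshold check above, using the example limits from the video-monitoring case (1000 ms, 90%, 80%); the function and constant names are assumptions:

```python
RESPONSE_TIME_LIMIT_MS = 1000.0
MAX_REPLACEMENT_RATE = 0.90
MAX_OCCUPANCY_RATE = 0.80

def may_issue_read_back(T: float, P: float, K: float) -> bool:
    """Hot-spot data read-back instructions are issued only if every parameter is below its limit."""
    return (T < RESPONSE_TIME_LIMIT_MS
            and P < MAX_REPLACEMENT_RATE
            and K < MAX_OCCUPANCY_RATE)
```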
Optionally, the step of executing the issuing of the hot spot data read-back instruction includes:
(1) Dynamically determine an instruction issuing amount according to the acquired read cache replacement parameter and write cache occupancy parameter, where the instruction issuing amount is the maximum number of hot-spot data read-back instructions that may be issued in the corresponding time interval.
Further, as an implementation manner for dynamically determining the instruction issue amount, a formula may be used according to the read cache replacement parameter, the write cache occupancy parameter, the maximum replacement rate value, and the maximum occupancy rate value:
N=M*((K′-K)*(P′-P)),
to calculate the corresponding instruction issuing quantity, where N represents the instruction issuing quantity, M represents a preset fixed constant, K′ represents the maximum occupancy rate, K represents the write cache occupancy parameter, P′ represents the maximum replacement rate, and P represents the read cache replacement parameter. With the instruction issuing amount obtained in this way, and provided the response time parameter does not exceed its response time limit, the number of read-back commands is reduced as soon as either the read cache replacement parameter or the write cache occupancy parameter exceeds a fairly high threshold, and issuing stops completely once the corresponding maximum value is exceeded (see the sketch after step (2) below). This realizes dynamic issuing of hot-spot data read-back instructions while fully considering the service pressure and performance of the storage device 100, avoids affecting the main service, and improves usage efficiency.
(2) Issue the hot-spot data read-back instructions according to the instruction issuing amount. It can be understood that the instruction issuing amount is the maximum number of hot-spot data read-back instructions that may be issued in the time interval; therefore, if the number of pending hot-spot data read-back instructions is not greater than the instruction issuing amount, all of them are issued. If the number of pending hot-spot data read-back instructions is greater than the instruction issuing amount, a number of instructions equal to the instruction issuing amount is selected and issued, while the instructions that were not issued wait for the next time interval, in which the issuing policy is determined again from the newly acquired response time parameter, read cache replacement parameter, and write cache occupancy parameter.
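A minimal sketch of computing the instruction issuing amount N = M*((K′-K)*(P′-P)) and dispatching at most N pending read-back instructions per interval. The value of M, the dispatch hook, and the default limits are assumptions; only the formula itself comes from the text above.

```python
def instruction_issue_amount(P: float, K: float,
                             P_max: float = 0.90, K_max: float = 0.80,
                             M: int = 64) -> int:
    """N shrinks as the replacement rate P or the occupancy K approaches its maximum."""
    if P >= P_max or K >= K_max:
        return 0                               # issuing stops once a maximum is reached
    return int(M * ((K_max - K) * (P_max - P)))

def issue_read_backs(pending: list, P: float, K: float, dispatch) -> list:
    """Dispatch up to N instructions; return those deferred to the next time interval."""
    n = instruction_issue_amount(P, K)
    for cmd in pending[:n]:
        dispatch(cmd)                          # assumed hook into the controller's read-back path
    return pending[n:]
```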
Through the above steps, automatic adjustment according to the busy degree of the storage device 100 can be realized. The method can reduce the pressure of the disk, does not influence the real-time service, further improves the system performance, and does not need human intervention.
Further, as shown in fig. 3, the method for managing a storage space according to an embodiment of the present invention further includes the following steps:
step S201, when the space occupied by the data to be written is smaller than the space size corresponding to the Block of the second-level read cache space, collecting a response time parameter, the read cache replacement parameter and the write cache occupation parameter.
The data to be written may include a data block that needs to be moved out of the read Cache space of the Cache in order to fill the secondary read Cache space. That is, a data block originally waits in a Block of the read Cache space of the Cache to be called by the upper-layer service, but when that Block of the read Cache space is needed again, the data block must be moved from the read Cache space of the Cache into the secondary read cache space, where it continues to wait to be called by the upper-layer service.
In the embodiment of the invention, when the space occupied by the data to be written is smaller than the space corresponding to a Block of the secondary read cache space, collection of the response time parameter, the read cache replacement parameter, and the write cache occupancy parameter is started. The collection principle is the same as in step S101 and is not repeated here.
Step S202: if the response time parameter is not lower than the response time limit, or the read cache replacement parameter is not lower than the maximum replacement rate, or the write cache occupancy parameter is not lower than the maximum occupancy rate, the data to be written is written directly into a Block of the secondary read cache space. The supplementary data does not need to be read back from the Disks of the main memory at this time, which effectively improves the filling efficiency of the secondary read cache space when the service pressure of the storage device 100 is high.
Step S203: mark the Block into which the data to be written has been written as a Block to be read back, and update the Block's state information in the system memory 104.
Further, the storage space management method provided in the embodiment of the present invention may also include: when a read command received by the storage controller 103 hits data stored in a Block to be read back, the data corresponding to the read command is read from the main memory 101. This solves the problem that a Block to be read back cannot directly serve the upper-layer service, so the upper-layer services run by the storage device 100 can proceed in order.
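A minimal sketch, with assumed names, of the hit handling above: a read command that hits a Block still marked as "to be read back" is served from the main memory instead of from the secondary read cache, since only filled Blocks are valid for the upper layer.

```python
def serve_read(block_id: int, block_states: dict,
               read_from_secondary_cache, read_from_main_memory):
    """block_states maps Block number -> 'filled' / 'unfilled' / 'to_be_read_back'."""
    if block_states.get(block_id) == "filled":
        return read_from_secondary_cache(block_id)   # valid hit on a filled Block
    return read_from_main_memory(block_id)           # otherwise read directly from disk
```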
Second embodiment
Referring to fig. 4, a storage space management apparatus 200 according to an embodiment of the present invention is provided. The storage space management apparatus 200 is applied to the storage controller 103 of the storage device 100. Alternatively, as shown in fig. 4, the storage space management apparatus 200 includes: an obtaining module 201 and an adjusting module 202.
An obtaining module 201, configured to obtain at least one type of mean evaluation parameters of the storage device 100 according to a preset time interval when detecting that there is an allocation unit Block to be read back in the secondary read cache space, where the mean evaluation parameters are used to represent a current service pressure of the storage device 100.
And the adjusting module 202 is configured to adjust an issuing strategy of a hotspot data read-back instruction corresponding to the Block to be read-back according to the mean evaluation parameter.
Further, the mean evaluation parameter includes a response time parameter of the main memory 101, a read Cache replacement parameter of the Cache, and a write Cache occupation parameter of the Cache; the storage controller 103 is pre-stored with a response time limit value, a replacement rate maximum value, and an occupancy rate maximum value determined according to the service requirement of the storage device 100, and the adjusting module 202 is specifically configured to:
and when the response time parameter is not lower than the response time limit value, not executing the issuing of the hot spot data read-back instruction.
And when the read cache replacement parameter is not lower than the maximum replacement rate, not executing issuing of the hot spot data read-back instruction.
And when the write cache occupation parameter is not lower than the maximum occupancy rate, not executing the issuing of the hotspot data read-back instruction.
And when the response time parameter is lower than the response time limit value, the read cache replacement parameter is lower than the maximum replacement rate, and the write cache occupation parameter is lower than the maximum occupancy rate, issuing the hotspot data read-back instruction.
Preferably, the manner for executing, by the adjusting module 202, the issuing of the hot spot data read-back instruction includes: and dynamically determining an instruction issuing amount according to the obtained read cache replacement parameter and the write cache occupation parameter, and issuing the hotspot data read-back instruction according to the instruction issuing amount. It should be noted that the instruction issue amount is the maximum issue amount of the hot spot data read-back instructions in the corresponding time interval.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
In summary, embodiments of the present invention provide a storage space management method and apparatus, which can be applied to a storage controller in a storage device. The storage device further includes a secondary read cache space. The storage space management method comprises: when it is detected that an allocation unit (Block) to be read back exists in the secondary read cache space, acquiring at least one type of mean evaluation parameter of the storage device at a preset time interval, wherein the mean evaluation parameter is used to characterize the current service pressure of the storage device; and adjusting the issuing policy of the hot-spot data read-back instruction corresponding to the Block to be read back according to the mean evaluation parameter. The method and apparatus avoid blindly issuing read-back instructions that would place huge pressure on the storage device, thereby preventing interference with the normal operation of upper-layer services and improving the use experience.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A storage space management method is applied to a storage controller in a storage device, wherein the storage device further comprises a secondary read cache space, and the storage space management method comprises the following steps:
when detecting that a distribution unit Block to be read back exists in the secondary read cache space, acquiring at least one type of mean value evaluation parameters of the storage device according to a preset time interval, wherein the mean value evaluation parameters are used for representing the current service pressure of the storage device;
and adjusting the issuing strategy of the hot spot data read-back instruction corresponding to the Block to be read-back according to the mean value evaluation parameter.
2. The method of claim 1, wherein the storage device further comprises a Cache memory Cache and a main memory, and the mean evaluation parameter comprises a response time parameter of the main memory, a read Cache replacement parameter of the Cache, and a write Cache occupancy parameter of the Cache; the storage controller is pre-stored with a response time limit value, a replacement rate maximum value and an occupancy rate maximum value determined according to the service requirement of the storage device, and the step of adjusting the issuing strategy of the hot spot data read-back instruction corresponding to the Block to be read-back comprises the following steps:
when the response time parameter is not lower than the response time limit value, the issuing of the hot spot data read-back instruction is not executed;
when the read cache replacement parameter is not lower than the maximum replacement rate, the issuing of the hot spot data read-back instruction is not executed;
when the write cache occupation parameter is not lower than the maximum occupancy rate, the issuing of the hotspot data read-back instruction is not executed;
and when the response time parameter is lower than the response time limit value, the read cache replacement parameter is lower than the maximum replacement rate, and the write cache occupation parameter is lower than the maximum occupancy rate, issuing the hotspot data read-back instruction.
3. The method of claim 2, wherein the step of executing the issuing of the hotspot data read-back instruction comprises:
dynamically determining an instruction issuing amount according to the obtained read cache replacement parameter and the write cache occupation parameter;
issuing the hot spot data read-back instruction according to the instruction issuing amount;
and the instruction issuing quantity is the maximum issuing quantity of the hotspot data read-back instructions in the corresponding time interval.
4. The method of claim 3, wherein the step of dynamically determining an instruction issue amount according to the obtained read cache replacement parameter and write cache occupancy parameter comprises:
according to the read cache replacement parameter, the write cache occupation parameter, the maximum replacement rate and the maximum occupancy rate, utilizing a formula:
N=M*((K′-K)*(P′-P)),
and calculating the corresponding instruction issuing quantity, wherein N represents the instruction issuing quantity, M represents a preset fixed constant, K 'represents the maximum occupancy rate, K represents the write cache occupation parameter, P' represents the maximum replacement rate, and P represents the read cache replacement parameter.
5. The method of claim 2, wherein the method further comprises:
when the space occupied by the data to be written is smaller than the space size corresponding to the Block of the secondary read cache space, acquiring the response time parameter, the read cache replacement parameter and the write cache occupation parameter; the data to be written comprises data which needs to be read from the Cache so as to fill the secondary read Cache space;
if the response time parameter is not lower than the response time limit value, or the read cache replacement parameter is not lower than the maximum replacement rate, or the write cache occupation parameter is not lower than the maximum occupancy rate, writing the data to be written into a Block of the secondary read cache space;
and marking the Block written with the data to be written as the Block to be read back.
6. The method of claim 2, wherein the obtaining of the read cache replacement parameter comprises: acquiring a read Cache replacement parameter according to the acquired replacement number of blocks corresponding to the read Cache space of the Cache in unit time and the read quota of the read Cache space;
the method for acquiring the write cache occupation parameter comprises the following steps: and acquiring the write Cache occupation parameter according to the collected Block number retained in the write Cache space of the Cache and the write quota of the write Cache space.
7. The method of claim 2, wherein the method further comprises:
and when the read command received by the storage controller hits data stored in the Block to be read back, reading the data corresponding to the read command from the main memory.
8. A storage space management device is applied to a storage controller in a storage device, wherein the storage device further comprises a second-level read cache space, and the storage space management device comprises:
the acquisition module is used for acquiring at least one type of mean value evaluation parameters of the storage device according to a preset time interval when detecting that the distribution unit Block to be read back exists in the secondary read cache space, wherein the mean value evaluation parameters are used for representing the current service pressure of the storage device;
and the adjusting module is used for adjusting the issuing strategy of the hotspot data read-back instruction corresponding to the Block to be read-back according to the mean value evaluation parameter.
9. The apparatus of claim 8, wherein the storage device further comprises a Cache memory Cache and a main memory, the mean evaluation parameter comprises a response time parameter of the main memory, a read Cache replacement parameter of the Cache, and a write Cache occupancy parameter of the Cache; the storage controller is pre-stored with a response time limit value, a replacement rate maximum value and an occupancy rate maximum value determined according to the service requirement of the storage device, and the adjusting module is specifically configured to:
when the response time parameter is not lower than the response time limit value, the issuing of the hot spot data read-back instruction is not executed;
when the read cache replacement parameter is not lower than the maximum replacement rate, the issuing of the hot spot data read-back instruction is not executed;
when the write cache occupation parameter is not lower than the maximum occupancy rate, the issuing of the hotspot data read-back instruction is not executed;
and when the response time parameter is lower than the response time limit value, the read cache replacement parameter is lower than the maximum replacement rate, and the write cache occupation parameter is lower than the maximum occupancy rate, issuing the hotspot data read-back instruction.
10. The apparatus of claim 9, wherein the manner in which the adjustment module executes the issue of the hot spot data read-back command comprises:
dynamically determining an instruction issuing amount according to the obtained read cache replacement parameter and the write cache occupation parameter;
issuing the hot spot data read-back instruction according to the instruction issuing amount;
and the instruction issuing quantity is the maximum issuing quantity of the hotspot data read-back instructions in the corresponding time interval.
CN201811324331.5A 2018-11-08 2018-11-08 Storage space management method and device Active CN111158578B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811324331.5A CN111158578B (en) 2018-11-08 2018-11-08 Storage space management method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811324331.5A CN111158578B (en) 2018-11-08 2018-11-08 Storage space management method and device

Publications (2)

Publication Number Publication Date
CN111158578A true CN111158578A (en) 2020-05-15
CN111158578B CN111158578B (en) 2022-09-06

Family

ID=70554811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811324331.5A Active CN111158578B (en) 2018-11-08 2018-11-08 Storage space management method and device

Country Status (1)

Country Link
CN (1) CN111158578B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116107503A (en) * 2022-12-26 2023-05-12 长春吉大正元信息技术股份有限公司 Data transmission method and device and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5802563A (en) * 1996-07-01 1998-09-01 Sun Microsystems, Inc. Efficient storage of data in computer system with multiple cache levels
US8458402B1 (en) * 2010-08-16 2013-06-04 Symantec Corporation Decision-making system and method for improving operating system level 2 cache performance
CN103246616A (en) * 2013-05-24 2013-08-14 浪潮电子信息产业股份有限公司 Global shared cache replacement method for realizing long-short cycle access frequency
CN104133642A (en) * 2014-07-29 2014-11-05 浙江宇视科技有限公司 SSD Cache filling method and device
CN104978282A (en) * 2014-04-04 2015-10-14 上海芯豪微电子有限公司 Cache system and method
CN106095696A (en) * 2016-07-26 2016-11-09 上海航天测控通信研究所 A kind of based on self adaptation route and the caching device of scheduling strategy
CN107229575A (en) * 2016-03-23 2017-10-03 上海复旦微电子集团股份有限公司 The appraisal procedure and device of caching performance


Also Published As

Publication number Publication date
CN111158578B (en) 2022-09-06

Similar Documents

Publication Publication Date Title
EP2939120B1 (en) Priority-based garbage collection for data storage systems
US9842060B1 (en) Cache over-provisioning in a data storage device
JP6870246B2 (en) Storage device and storage control device
JP3810738B2 (en) Adaptive pre-fetching of data on disk
KR101717644B1 (en) Apparatus, system, and method for caching data on a solid-state storage device
US8429351B1 (en) Techniques for determining an amount of data to prefetch
US7822731B1 (en) Techniques for management of information regarding a sequential stream
US8312217B2 (en) Methods and systems for storing data blocks of multi-streams and multi-user applications
CN106547476B (en) Method and apparatus for data storage system
US8667224B1 (en) Techniques for data prefetching
US20180232314A1 (en) Method for storing data by storage device and storage device
JP2002342037A (en) Disk device
US20180018269A1 (en) Limiting access operations in a data storage device
CN106708751A (en) Storage device including multi-partitions for multimode operations, and operation method thereof
JP4186509B2 (en) Disk system and its cache control method
KR20140053309A (en) Cache management including solid state device virtualization
CN107463509B (en) Cache management method, cache controller and computer system
US20220326872A1 (en) Method for selecting a data block to be collected in gc and storage device thereof
CN106201335B (en) Storage system
CN105930282A (en) Data cache method used in NAND FLASH
JP4561168B2 (en) Data processing system and method, and processing program therefor
CN107924291A (en) Storage system
CN110413545B (en) Storage management method, electronic device, and computer program product
CN110377233A (en) SSD reading performance optimization method, device, computer equipment and storage medium
JP7011655B2 (en) Storage controller, storage system, storage controller control method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant