CN113377291A - Data processing method, device, equipment and medium of cache equipment - Google Patents

Data processing method, device, equipment and medium of cache equipment

Info

Publication number
CN113377291A
CN113377291A (application CN202110642174.8A)
Authority
CN
China
Prior art keywords
data
mode
target
bucket
read
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110642174.8A
Other languages
Chinese (zh)
Other versions
CN113377291B (en)
Inventor
张朝潞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Topsec Technology Co Ltd
Beijing Topsec Network Security Technology Co Ltd
Beijing Topsec Software Co Ltd
Original Assignee
Beijing Topsec Technology Co Ltd
Beijing Topsec Network Security Technology Co Ltd
Beijing Topsec Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Topsec Technology Co Ltd, Beijing Topsec Network Security Technology Co Ltd, Beijing Topsec Software Co Ltd filed Critical Beijing Topsec Technology Co Ltd
Priority to CN202110642174.8A priority Critical patent/CN113377291B/en
Publication of CN113377291A publication Critical patent/CN113377291A/en
Application granted granted Critical
Publication of CN113377291B publication Critical patent/CN113377291B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Embodiments of the disclosure relate to a data processing method, apparatus, device and medium for a cache device, wherein the method comprises the following steps: acquiring target data, wherein the target data comprises read-write data, device usage rate and dirty data ratio; determining the target mode currently corresponding to the cache device according to the target data; and performing data processing based on the target mode; wherein the cache device comprises a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device comprise buckets of different types, the target mode comprises a plurality of modes, and the allocation proportions of the different bucket types differ between modes. In the embodiments of the disclosure, because the cache device comprising a solid state disk and a mechanical hard disk is divided into buckets of different types, the mode in which the cache device processes data is adjusted dynamically, which greatly improves the performance of the cache device and enhances its availability.

Description

Data processing method, device, equipment and medium of cache equipment
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a data processing method, apparatus, device, and medium for a cache device.
Background
Because the performance of a Hard Disk Drive (HDD) is slow, in practical scenarios a Solid State Drive (SSD) is commonly used as a cache device for the HDD to accelerate performance.
When the solid state disk serving as the cache is full, the application load and the write-back (writeback) traffic both fall on the mechanical disk at the same time, significantly degrading the performance of the overall storage system. The typical current approach is to obtain the amount of dirty data on the cache disk and to control, based on that amount, whether a write operation may be executed: a check is required before every write, the write is executed only if the condition is met, and when the cache is full the application load is throttled according to the write-back rate. However, this approach consumes additional processor resources and cannot cope with complex load conditions.
Disclosure of Invention
In order to solve, or at least partially solve, the above technical problem, the present disclosure provides a data processing method, apparatus, device and medium for a cache device.
The embodiment of the disclosure provides a data processing method of a cache device, and the method comprises the following steps:
acquiring target data, wherein the target data comprises read-write data, equipment utilization rate and dirty data ratio;
determining a current corresponding target mode of the cache device according to the target data;
performing data processing based on the target mode;
the cache device comprises a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device comprise buckets of different types, the target mode comprises a plurality of modes, and the allocation proportions of the buckets of different types in different modes are different.
The embodiment of the present disclosure further provides a data processing apparatus of a cache device, where the apparatus includes:
the data acquisition module is used for acquiring target data, wherein the target data comprises read-write data, equipment utilization rate and dirty data ratio;
the mode determining module is used for determining a current corresponding target mode of the cache device according to the target data;
a data processing module for performing data processing based on the target mode;
the cache device comprises a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device comprise buckets of different types, the target mode comprises a plurality of modes, and the allocation proportions of the buckets of different types in different modes are different.
An embodiment of the present disclosure further provides an electronic device, which includes: a processor; a memory for storing the processor-executable instructions; the processor is used for reading the executable instruction from the memory and executing the instruction to realize the data processing method of the cache device provided by the embodiment of the disclosure.
The embodiment of the present disclosure also provides a computer-readable storage medium, where a computer program is stored, where the computer program is used to execute the data processing method of the cache device provided by the embodiment of the present disclosure.
Compared with the prior art, the technical solution provided by the embodiments of the disclosure has the following advantages. In the data processing scheme for the cache device, target data are acquired, wherein the target data comprise read-write data, device usage rate and dirty data ratio; the target mode currently corresponding to the cache device is determined according to the target data; and data processing is performed based on the target mode; wherein the cache device comprises a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device comprise buckets of different types, the target mode comprises a plurality of modes, and the allocation proportions of the different bucket types differ between modes. With this technical solution, the cache device comprising a solid state disk and a mechanical hard disk is divided into buckets of different types, the current mode can be determined in real time from data such as the reads, writes and usage of the cache device, and data are read and written based on the bucket allocation proportions of the current mode. The mode in which the cache device processes data is thus adjusted dynamically, the performance of the cache device is greatly improved, the drawbacks of the related art, namely extra resource consumption and the inability to cope with complex load conditions, are avoided, and the availability of the cache device is enhanced.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; it is obvious that other drawings can be obtained from these drawings by those skilled in the art without inventive effort.
Fig. 1 is a schematic flowchart of a data processing method of a cache device according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a cache device according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a bucket in a caching device according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of another data processing method of a cache device according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a mode transition provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a B + tree according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a node according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a data processing apparatus of a cache device according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced in other ways than those described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
Because the performance of a Hard Disk Drive (HDD) is slow, in practical scenarios a Solid State Drive (SSD) is usually used as a cache device for the HDD to accelerate its performance; for example, in Linux a fast SSD is used as a cache device to accelerate a slow HDD.
When the solid state disk serving as the cache is full, the application load and the write-back (writeback) traffic both fall on the mechanical disk at the same time, significantly degrading the performance of the overall storage system. The typical current approach is to obtain the amount of dirty data on the cache disk and to control, based on that amount, whether a write operation may be executed: a check is required before every write, the write is executed only if the condition is met, and when the cache is full the application load is throttled according to the write-back rate. However, this approach only controls write operations according to the amount of dirty data on the cache disk and cannot accurately control the write rate of the cache disk according to the dirty-data flush rate of the back-end disk; checking the cache disk capacity on every write operation additionally consumes Central Processing Unit (CPU) resources; and a simple rate-limiting method cannot cope with complex application loads.
In order to solve the foregoing problem, embodiments of the present disclosure provide a data processing method for a cache device, which is described below with reference to specific embodiments.
Fig. 1 is a schematic flowchart of a data processing method of a cache device according to an embodiment of the present disclosure, where the method may be executed by a data processing apparatus of the cache device, where the apparatus may be implemented by software and/or hardware, and may be generally integrated in an electronic device. As shown in fig. 1, the method is applied to a cache device, and includes:
and step 101, acquiring target data.
In this embodiment, the cache device may include a first device based on a solid state disk and a second device based on a mechanical hard disk, where the first device and the second device include buckets of different types.
Exemplarily, fig. 2 is a schematic diagram of a cache device provided in an embodiment of the present disclosure. As shown in fig. 2, the cache device may include a first device and a second device. The cache device may be a logical cache device that packages and abstracts the first device, based on a solid state disk, and the second device, based on a mechanical hard disk, into a single device; the cache device is generated virtually by the operating system, directs and distributes data read/write (Input/Output, I/O) requests through a block device driver, and ultimately places the data on the first device or the second device.
A bucket can be understood as a unit of storage space within a device, a container for stored objects, whose size may equal that of a physical sector, a typical value being 512 bytes. In the embodiment of the present disclosure, both the first device and the second device in the cache device are divided into buckets of a fixed size, which are used to manage data. The logical space of the cache device in this embodiment consists of the first device and the second device.
In this embodiment, the cache device may be provided with a plurality of types of buckets to indicate different storage locations and states, and the types of the buckets support dynamic adjustment. Optionally, the buckets of different types include a first bucket located in the first device, a second bucket located in the second device, and a third bucket located in the first device and the second device.
For example, fig. 3 is a schematic diagram of buckets in a cache device according to an embodiment of the present disclosure. As shown in fig. 3, three types of buckets may be provided in this embodiment: a first bucket, a second bucket, and a third bucket. Blocks numbered 1 represent first buckets, blocks numbered 2 represent second buckets, blocks numbered 3 represent third buckets, and unnumbered blocks may be free buckets, whose type can be adjusted dynamically. The first bucket resides in the first device, its data are stored only in the first device, and reclamation is forbidden; the second bucket resides in the second device, and its data are stored only in the second device; a third bucket may reside in both the first device and the second device, with the same copy of the data stored in each, and the third bucket in the first device supports reclamation.
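To make the bucket layout above concrete, the following is a minimal sketch of how the bucket types could be represented in code. The enum values, field names, and the default 512-byte size are illustrative assumptions taken from this description, not an actual data layout defined by the disclosure.

```python
from dataclasses import dataclass
from enum import Enum

class BucketType(Enum):
    FREE = 0    # unassigned; its type may be adjusted dynamically
    FIRST = 1   # data stored only in the first (solid state disk) device; reclamation forbidden
    SECOND = 2  # data stored only in the second (mechanical hard disk) device
    THIRD = 3   # same copy stored in both devices; the copy in the first device may be reclaimed

@dataclass
class Bucket:
    index: int                          # bucket number within the cache device
    type: BucketType = BucketType.FREE
    size: int = 512                     # fixed bucket size in bytes (typical value from the description)
```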
The target data may be data that affects the data processing performance of the cache device and may include multiple types of data. In this embodiment the target data may include read-write data, device usage rate, and dirty data ratio. The read-write data may be data related to data reads and data writes; for example, the read-write data may include the read-write amount, the read load proportion, the write load proportion, and so on. The device usage rate may be the usage of the first device in the cache device. The dirty data ratio may be the ratio of dirty data to total data in the cache device.
Specifically, the cache device may acquire the target data at a preset time interval, where the preset time interval may be a fixed value or an increasing value and may be set according to the actual situation.
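As a sketch only, the target data and its periodic collection might look as follows; the field names, the derived ratios, and the `collect` callback are hypothetical and stand in for whatever counters the cache device actually exposes.

```python
import time
from dataclasses import dataclass

@dataclass
class TargetData:
    read_ops: int        # data read operations observed in the sampling window
    write_ops: int       # data write operations observed in the sampling window
    device_usage: float  # usage of the first (solid state disk) device, 0.0 to 1.0
    dirty_ratio: float   # dirty data / total data in the cache device, 0.0 to 1.0

    @property
    def total_ops(self) -> int:
        return self.read_ops + self.write_ops

    @property
    def read_ratio(self) -> float:
        return self.read_ops / self.total_ops if self.total_ops else 0.0

    @property
    def write_ratio(self) -> float:
        return self.write_ops / self.total_ops if self.total_ops else 0.0

def sample_target_data(collect, interval_seconds: float = 5.0):
    """Yield a TargetData snapshot at a preset interval; `collect` is assumed
    to read the cache device's counters and return a TargetData instance."""
    while True:
        yield collect()
        time.sleep(interval_seconds)
```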
And 102, determining a current corresponding target mode of the cache device according to the target data.
The target mode in this embodiment may include an idle mode, a read mode, a write mode, and a balanced mode, and the allocation proportions of the different bucket types differ between modes.
The idle mode corresponds to a scenario in which the overall number of data read operations and data write operations in the cache device is low. The read mode corresponds to a scenario with a large share of data read operations. The write mode corresponds to a scenario with a large share of data write operations. The balanced mode corresponds to a scenario in which data read operations and data write operations are roughly balanced.
In this embodiment, determining the current target mode corresponding to the cache device according to the target data may include: if the read-write data quantity of the read-write data in the target data is smaller than the read-write threshold value, determining that the target mode is an idle mode; otherwise, comparing the equipment utilization rate of the first equipment with a utilization rate threshold value; and if the device utilization rate of the first device is greater than the utilization rate threshold value and the read load ratio in the read-write data is greater than a first preset threshold value, determining that the target mode is the read mode.
The amount of read and write data may be the total number of data read operations and data write operations. The usage threshold may be set according to actual conditions, for example, the usage threshold may be 60%. The read load ratio is the ratio of the data read operation to the amount of read and write data. The first preset threshold is a read load ratio threshold, and may be set according to actual conditions, for example, the first preset threshold may be 65%.
In this embodiment, after the target data is acquired, the read-write data amount of the read-write data in the target data may be extracted, and the read-write data amount may be compared with the read-write threshold, and if the read-write data amount is smaller than the read-write threshold, the target mode may be determined to be the idle mode. If the amount of read-write data is greater than or equal to the read-write threshold, the device usage rate of the first device may be compared with the usage rate threshold. If the device usage rate of the first device is greater than the usage rate threshold and the read duty cycle is greater than a first preset threshold, the target mode may be determined to be the read mode.
In this embodiment of the present disclosure, determining a current target mode corresponding to the cache device according to the target data may include: and if the device utilization rate of the first device is greater than the utilization rate threshold, the write load ratio in the read-write data is greater than a second preset threshold, and the dirty data ratio is greater than a third preset threshold, determining that the target mode is the write mode. Optionally, determining a current corresponding target mode of the cache device according to the target data may include: and if the device utilization rate of the first device is greater than the utilization rate threshold, the write load ratio and the read load ratio in the read-write data are the same, and the dirty data ratio is greater than a fourth preset threshold, determining that the target mode is the balanced mode.
The write load ratio is the ratio of the data write operation to the amount of read and write data. The second preset threshold is a write duty ratio threshold, and may be set according to an actual situation, and may be the same as or different from the first preset threshold. The third preset threshold and the fourth preset threshold may be two thresholds set for the dirty data percentage, for example, the third preset threshold may be 60%, and the fourth preset threshold may be 50%.
In this embodiment, when the device usage rate of the first device is greater than the usage rate threshold, the target mode may be further determined from the write load proportion and the dirty data ratio in the read-write data: when the write load proportion in the read-write data is greater than the second preset threshold and the dirty data ratio is greater than the third preset threshold, the target mode is determined to be the write mode; and when the write load proportion and the read load proportion in the read-write data are the same (both 50%) and the dirty data ratio is greater than the fourth preset threshold, the target mode is determined to be the balanced mode.
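Putting the rules above together, the mode decision could be sketched as below, reusing the TargetData sketch from earlier. The concrete threshold values are only the examples mentioned in this description (or, where none is given, assumed placeholders) and would be tunable in practice.

```python
from enum import Enum
from typing import Optional

class Mode(Enum):
    IDLE = "idle"
    READ = "read"
    WRITE = "write"
    BALANCED = "balanced"

def determine_target_mode(
    t: TargetData,
    rw_threshold: int = 1000,               # read-write amount threshold (assumed value)
    usage_threshold: float = 0.60,          # example value from the description
    read_ratio_threshold: float = 0.65,     # first preset threshold
    write_ratio_threshold: float = 0.65,    # second preset threshold (assumed equal to the first)
    dirty_write_threshold: float = 0.60,    # third preset threshold
    dirty_balanced_threshold: float = 0.50, # fourth preset threshold
) -> Optional[Mode]:
    # Low overall read-write amount: idle mode, bucket allocation is unconstrained.
    if t.total_ops < rw_threshold:
        return Mode.IDLE
    # Otherwise the decision hinges on the usage of the first (solid state disk) device.
    if t.device_usage > usage_threshold:
        if t.read_ratio > read_ratio_threshold:
            return Mode.READ
        if t.write_ratio > write_ratio_threshold and t.dirty_ratio > dirty_write_threshold:
            return Mode.WRITE
        if t.read_ratio == t.write_ratio and t.dirty_ratio > dirty_balanced_threshold:
            return Mode.BALANCED
    return None  # no rule matched; keep the current mode
```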
Step 103, data processing is performed based on the target mode.
The data processing may include the data reading operation and the data writing operation.
In the embodiment of the present disclosure, after the cache device obtains the target mode, data processing may be performed according to the allocation proportion of different types of buckets corresponding to the target mode.
Optionally, when the target mode is the idle mode, data to be written are written into a first bucket in the first device, and data are read from the first device and the second device. In the idle mode, because the read-write amount is low, the allocation proportions of the different bucket types in the cache device are not constrained, and data to be written can be written directly into a first bucket in the first device. For a read, if the data are contained in a bucket of the first device they can be read from it directly; otherwise the data are read from the second device, the read data are written into a free bucket in the first device, and the type of that free bucket is changed to a third bucket.
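A sketch of idle-mode handling under these rules, using the BucketType enum from the earlier bucket sketch. The cache, device, and index interfaces (`find_bucket`, `allocate_free_bucket`, `read`, `write`, and so on) are assumed helpers, not APIs defined by the disclosure.

```python
def idle_mode_read(cache, lba: int) -> bytes:
    """Idle mode read: serve from the first device on a hit; on a miss, read
    from the second device and promote the data into a free bucket, whose
    type then becomes a third bucket (a copy now exists on both devices)."""
    bucket = cache.first_device.find_bucket(lba)
    if bucket is not None:
        return cache.first_device.read(bucket)
    data = cache.second_device.read_lba(lba)
    free = cache.first_device.allocate_free_bucket()
    if free is not None:
        cache.first_device.write(free, data)
        free.type = BucketType.THIRD
        cache.index.insert(lba, free)        # record the mapping (e.g. in the B+ tree index)
    return data

def idle_mode_write(cache, lba: int, data: bytes) -> None:
    """Idle mode write: write directly into a first bucket of the first device."""
    bucket = cache.first_device.allocate_free_bucket()
    cache.first_device.write(bucket, data)
    bucket.type = BucketType.FIRST
    cache.index.insert(lba, bucket)
```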
Optionally, when the target mode is the read mode, the proportion of the third bucket is set to be greater than a first proportion threshold; if an available first bucket exists in the first device, the data to be written are written into the available first bucket; otherwise, the data to be written are written after an occupied third bucket in the first device is reclaimed.
When the target mode is the read mode, because there are fewer data write operations, the proportion of third buckets may be set to be greater than the first proportion threshold, while the combined proportion of first and second buckets is kept correspondingly small. The first proportion threshold may be a relatively large value, for example 80%. In the read mode, if a data write request is received, the data to be written may be written into an available first bucket of the first device; if no first bucket is available in the first device, an occupied third bucket in the first device may be reclaimed using a reclamation algorithm, the data to be written are written into the reclaimed free bucket, and its type is changed to a first bucket. The reclamation algorithm may be chosen according to the actual situation; for example, a Least Recently Used (LRU) algorithm may be adopted.
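A sketch of write handling in read mode, using LRU reclamation of an occupied third bucket as the example reclamation algorithm mentioned above; `lru_third_bucket` and `invalidate` are assumed helpers, and BucketType comes from the earlier bucket sketch.

```python
def read_mode_write(cache, lba: int, data: bytes) -> None:
    """Read mode write: use an available first bucket; if none is free,
    reclaim the least recently used third bucket in the first device
    (its data still has a copy on the second device) and reuse it."""
    bucket = cache.first_device.allocate_free_bucket()
    if bucket is None:
        victim = cache.first_device.lru_third_bucket()
        cache.first_device.invalidate(victim)   # third bucket -> free bucket
        bucket = victim
    cache.first_device.write(bucket, data)
    bucket.type = BucketType.FIRST
    cache.index.insert(lba, bucket)
```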
Optionally, when the target mode is the write mode, the proportion of the first bucket is set to be greater than a second proportion threshold; and if the proportion of the first bucket is greater than a third proportion threshold, the data to be written are written after the data in the first bucket have been written back to the second device, where the third proportion threshold is greater than the second proportion threshold. The second and third proportion thresholds are both proportion values for the first bucket; the third proportion threshold is simply greater than the second, and the specific values may be set according to the actual situation.
When the target mode is the write mode, because there are fewer data read operations, the proportion of first buckets may be set to be greater than the second proportion threshold, and the write rate may be controlled according to the number of available buckets in the first device. In the write mode the number of first buckets grows as data write operations are performed, and the proportion of first buckets may be compared with the third proportion threshold. If the proportion of first buckets is greater than the third proportion threshold, part of the data in the first buckets may be written back to the second device; after the write-back completes, the type of those first buckets is changed to third buckets, and the third buckets can then be recycled, that is, their type is changed to free buckets, the data to be written are written into the free buckets, and the type of the free buckets is changed to first buckets.
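A sketch of the write-mode cycle described above (write back, demote to third buckets, recycle, absorb the new write); the threshold value and the `pick_writeback_buckets` / `write_back` helpers are assumptions, and BucketType comes from the earlier bucket sketch.

```python
def write_mode_write(cache, lba: int, data: bytes,
                     third_ratio_threshold: float = 0.90) -> None:
    """Write mode: when first buckets exceed the third proportion threshold,
    write part of their data back to the second device, turn those buckets
    into third buckets, and recycle a third bucket to absorb the new write."""
    if cache.first_device.ratio_of(BucketType.FIRST) > third_ratio_threshold:
        for bucket in cache.first_device.pick_writeback_buckets():
            cache.second_device.write_back(bucket)   # data now also resides on the second device
            bucket.type = BucketType.THIRD
        victim = cache.first_device.lru_third_bucket()
        if victim is not None:
            cache.first_device.invalidate(victim)    # third bucket -> free bucket
    bucket = cache.first_device.allocate_free_bucket()
    cache.first_device.write(bucket, data)
    bucket.type = BucketType.FIRST
    cache.index.insert(lba, bucket)
```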
Optionally, when the target mode is the balanced mode, the number of the first buckets in the first device is set to be greater than the number of the third buckets.
When the target mode is the balanced mode, because the proportions of data read operations and data write operations are close, and because, from the application's point of view, a data write operation usually needs to read and modify data before writing new data, the buckets in the first device may preferentially be allocated as first buckets in the balanced mode, that is, the number of first buckets is set to be greater than the number of third buckets, so that data write operations are satisfied first. As the number of first buckets grows with data write operations, part of their data can be written back to the second device; after the write-back completes, the type of those first buckets is changed to third buckets, the third buckets are then recycled, that is, changed into free buckets, the data to be written are written into the free buckets, and their type is changed to first buckets, and this process repeats. Dynamically adjusting the bucket allocation proportions in this way accelerates write performance while suppressing read performance, which in turn limits the number of data write operations the application can issue and prevents exhaustion of the first buckets in the first device from affecting the normal operation of the corresponding programs.
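A sketch of how the balanced-mode preference for first buckets might be enforced; the reclamation loop and the `count`, `lru_third_bucket`, and `invalidate` helpers are assumptions.

```python
def rebalance_for_balanced_mode(cache) -> None:
    """Balanced mode: keep more first buckets than third buckets in the first
    device so that write operations (which, from the application's point of
    view, usually read and modify data before writing) are satisfied first."""
    while (cache.first_device.count(BucketType.THIRD)
           >= cache.first_device.count(BucketType.FIRST)):
        victim = cache.first_device.lru_third_bucket()
        if victim is None:
            break
        cache.first_device.invalidate(victim)   # third bucket -> free bucket
```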
In the above scheme, three different types of buckets are set in the first device and the second device of the cache device for data processing, and the allocation proportions of the different bucket types are adjusted dynamically in the different modes, so that the cache device achieves optimal data read-write performance and its availability is enhanced.
In this embodiment of the present disclosure, the cache device may record positions of the buckets in the first device and the second device by using a B + tree index, and a structure of a record in a node of the B + tree is ckey.
Because the storage capacity of the cache device is large and many buckets are partitioned, a B + tree may be used in this embodiment to manage the buckets: the B + tree index records the positions, that is, the mapping relationships, of the buckets in the first device and the second device, each B + tree node itself corresponds to one bucket, and the record structure is ckey.
With the ckey described above, the buckets of the first device and the second device in the cache device can be managed together, no separate index needs to be built for each device, and lookup performance is greatly improved. The ckey also allows the first device to be used when the second device is full, that is, all buckets of the second device are set as second buckets and all buckets of the first device as first buckets, so that a full disk does not affect the normal operation of the application.
According to the data processing scheme for the cache device, target data are acquired, wherein the target data comprise read-write data, device usage rate and dirty data ratio; the target mode currently corresponding to the cache device is determined according to the target data; and data processing is performed based on the target mode; wherein the cache device comprises a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device comprise buckets of different types, the target mode comprises a plurality of modes, and the allocation proportions of the different bucket types differ between modes. With this technical solution, the cache device comprising a solid state disk and a mechanical hard disk is divided into buckets of different types, the current mode can be determined in real time from data such as the reads, writes and usage of the cache device, and data are read and written based on the bucket allocation proportions of the current mode. The mode in which the cache device processes data is thus adjusted dynamically, the performance of the cache device is greatly improved, the drawbacks of the related art, namely extra resource consumption and the inability to cope with complex load conditions, are avoided, and the availability of the cache device is enhanced.
Fig. 4 is a schematic flow chart of another data processing method of a cache device according to an embodiment of the present disclosure, and this embodiment further optimizes the data processing method of the cache device based on the foregoing embodiment. As shown in fig. 4, the method includes:
step 201, target data is obtained.
The target data comprises read-write data, equipment utilization rate and dirty data ratio.
Step 202, determining a current corresponding target mode of the cache device according to the target data.
The cache device comprises a first device based on the solid state disk and a second device based on the mechanical hard disk, the first device and the second device comprise storage buckets of different types, the target mode comprises a plurality of modes, and the distribution proportion of the storage buckets of different types in different modes is different. Optionally, the buckets of different types include a first bucket, a second bucket, and a third bucket, where the first bucket is located in the first device, the second bucket is located in the second device, and the third bucket is located in the first device and the second device, and the types of the buckets support dynamic adjustment.
The target mode may include an idle mode, a read mode, a write mode, and a balanced mode.
Optionally, determining a current corresponding target mode of the cache device according to the target data may include: if the read-write data quantity of the read-write data in the target data is smaller than the read-write threshold value, determining that the target mode is an idle mode; otherwise, comparing the equipment utilization rate of the first equipment with a utilization rate threshold value; and if the device utilization rate of the first device is greater than the utilization rate threshold value and the read load ratio in the read-write data is greater than a first preset threshold value, determining that the target mode is the read mode.
Optionally, determining a current corresponding target mode of the cache device according to the target data may include: and if the device utilization rate of the first device is greater than the utilization rate threshold, the write load ratio in the read-write data is greater than a second preset threshold, and the dirty data ratio is greater than a third preset threshold, determining that the target mode is the write mode.
Optionally, determining a current corresponding target mode of the cache device according to the target data may include: and if the device utilization rate of the first device is greater than the utilization rate threshold, the write load ratio and the read load ratio in the read-write data are the same, and the dirty data ratio is greater than a fourth preset threshold, determining that the target mode is the balanced mode.
After step 202, step 203, step 204-step 205, step 206-step 207, or step 208 may be executed, which may be determined according to actual situations.
Step 203, when the target mode is the idle mode, writing the data to be written into the first bucket in the first device, and reading the data in the first device and the second device.
And step 204, when the target mode is the read mode, setting the occupation ratio of the third bucket to be larger than the first occupation ratio threshold value.
Step 205, if there is an available first bucket in the first device, writing the data to be written into the available first bucket; otherwise, after recovering the occupied third bucket in the first device, writing the data to be written.
Step 206, when the target mode is the write mode, setting the ratio of the first bucket to be larger than the second ratio threshold.
And step 207, if the occupation ratio of the first bucket is greater than a third occupation ratio threshold, writing the data to be written after the data in the first bucket has been written back to the second device, wherein the third occupation ratio threshold is greater than the second occupation ratio threshold.
Step 208, when the target mode is the balanced mode, setting the number of the first buckets in the first device to be larger than the number of the third buckets.
Illustratively, fig. 5 is a schematic diagram of a mode transition provided by an embodiment of the present disclosure. As shown in fig. 5, the target mode of the cache device may be determined and dynamically adjusted according to the load condition and the device usage data within the preset time, and it is shown in the figure that the idle mode may be dynamically switched with the read mode, the write mode, and the balance mode, respectively, and data read operation and data write operation are performed based on the switched modes. It is understood that dynamic switching (not shown in the figure) among the read mode, the write mode and the equalization mode can also be realized according to actual situations.
In this embodiment of the present disclosure, the cache device may record positions of the buckets in the first device and the second device by using a B + tree index, and a structure of a record in a node of the B + tree is ckey.
For example, fig. 6 is a schematic diagram of a B + tree provided in an embodiment of the present disclosure. The B + tree index records the positions, that is, the mapping relationships, of the buckets in the first device and the second device; each B + tree node is a group of already sorted records, itself corresponds to one bucket, and the record structure is ckey.
Exemplarily, fig. 7 is a schematic diagram of a node provided in an embodiment of the present disclosure. As shown in fig. 7, the record ckey of a B + tree node may include a Logical Block Address (LBA), a first device identifier, a second device identifier, a first device offset, a second device offset, a generation number, and a type identifier. The LBA indicates the number of the bucket in the cache device, that is, a logical position; the B + tree sorts the ckeys within a node by LBA, and the largest LBA of a node is used as the node's key for sorting within the B + tree. The first device identifier, denoted CID, identifies the first device; there may be multiple first devices in this scheme. The second device identifier, denoted BID, identifies the second device; there may likewise be multiple second devices. The first device offset, denoted Coffset, indicates the bucket number in the first device to which the cache device bucket corresponds; combined with the CID, the exact position can be found. The second device offset, denoted Boffset, indicates the bucket number in the second device to which the cache device bucket corresponds; combined with the BID, the exact position can be found. The generation number, denoted Gen, represents the generation of the ckey: because the B + tree resides in memory and must be persisted to disk, and adjusting the on-disk B + tree directly would cause a large performance loss, the generation number is used to modify the on-disk B + tree based on a log. The Type identifier, denoted Type, indicates the type of the bucket and may specifically be the first bucket, the second bucket, or the third bucket.
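A sketch of the ckey record with the fields enumerated above (LBA, CID, BID, Coffset, Boffset, Gen, Type); the Python field names and the LBA-based ordering rule are assumptions for illustration, and the BucketType enum comes from the earlier bucket sketch.

```python
from dataclasses import dataclass

@dataclass
class CKey:
    lba: int          # logical block address: bucket number within the cache device
    cid: int          # first device identifier (there may be multiple first devices)
    bid: int          # second device identifier (there may be multiple second devices)
    coffset: int      # bucket number within the first device, located together with cid
    boffset: int      # bucket number within the second device, located together with bid
    gen: int          # generation number, used for log-based updates of the on-disk B+ tree
    type: BucketType  # first, second, or third bucket

    def __lt__(self, other: "CKey") -> bool:
        # ckeys inside a B+ tree node are kept sorted by LBA; the largest LBA
        # of a node serves as the node's key for ordering within the tree.
        return self.lba < other.lba
```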
With the ckey described above, the buckets of the first device and the second device in the cache device can be managed together, no separate index needs to be built for each device, and lookup performance is greatly improved. The ckey also allows the first device to be used when the second device is full, that is, all buckets of the second device are set as second buckets and all buckets of the first device as first buckets, so that a full disk does not affect the normal operation of the application.
In this scheme, the solid-state-disk-based first device and the mechanical-hard-disk-based second device in the cache device can be uniformly divided into fixed-size buckets, a single B + tree is used to manage the buckets of both devices, the values in the ckey are adjusted dynamically according to the real-time read-write conditions and the device usage, and different load-mode algorithms are applied. This dynamic, flexible approach solves the problem in the related art that poor read-write flow control keeps the overall performance of the cache device low, allows performance to be maximized, solves the problems that a traditional cache device fills up easily and cannot grow its overall storage space, and increases usability.
According to the data processing scheme for the cache device, target data are acquired, wherein the target data comprise read-write data, device usage rate and dirty data ratio; the target mode currently corresponding to the cache device is determined according to the target data; and data processing is performed based on the target mode; wherein the cache device comprises a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device comprise buckets of different types, the target mode comprises a plurality of modes, and the allocation proportions of the different bucket types differ between modes. With this technical solution, the cache device comprising a solid state disk and a mechanical hard disk is divided into buckets of different types, the current mode can be determined in real time from data such as the reads, writes and usage of the cache device, and data are read and written based on the bucket allocation proportions of the current mode. The mode in which the cache device processes data is thus adjusted dynamically, the performance of the cache device is greatly improved, the drawbacks of the related art, namely extra resource consumption and the inability to cope with complex load conditions, are avoided, and the availability of the cache device is enhanced.
Fig. 8 is a schematic structural diagram of a data processing apparatus of a cache device according to an embodiment of the present disclosure, where the apparatus may be implemented by software and/or hardware, and may be generally integrated in an electronic device.
As shown in fig. 8, the apparatus includes:
a data obtaining module 301, configured to obtain target data, where the target data includes read-write data, a device usage rate, and a dirty data ratio;
a mode determining module 302, configured to determine, according to the target data, a target mode currently corresponding to the cache device;
a data processing module 303 configured to perform data processing based on the target mode;
the cache device comprises a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device comprise buckets of different types, the target mode comprises a plurality of modes, and the allocation proportions of the buckets of different types in different modes are different.
Optionally, the target mode includes an idle mode, a read mode, a write mode, and a balanced mode.
Optionally, the mode determining module 302 is specifically configured to:
if the read-write data volume of the read-write data in the target data is smaller than a read-write threshold, determining that the target mode is the idle mode; otherwise, comparing the equipment utilization rate of the first equipment with a utilization rate threshold value;
and if the device utilization rate of the first device is greater than a utilization rate threshold value and the read load ratio in the read-write data is greater than a first preset threshold value, determining that the target mode is the read mode.
Optionally, the mode determining module 302 is specifically configured to:
and if the device utilization rate of the first device is greater than a utilization rate threshold, the write load ratio in the read-write data is greater than a second preset threshold, and the dirty data ratio is greater than a third preset threshold, determining that the target mode is the write mode.
Optionally, the mode determining module 302 is specifically configured to:
and if the device utilization rate of the first device is greater than a utilization rate threshold, the write load proportion and the read load proportion in the read-write data are the same, and the dirty data proportion is greater than a fourth preset threshold, determining that the target mode is the balanced mode.
Optionally, the buckets of different types include a first bucket, a second bucket, and a third bucket, where the first bucket is located in the first device, the second bucket is located in the second device, and the third bucket is located in the first device and the second device, and the types of the buckets support dynamic adjustment.
Optionally, the data processing module 303 is specifically configured to:
and when the target mode is the idle mode, writing data to be written into a first bucket in the first device, and reading the data in the first device and the second device.
Optionally, the data processing module 303 is specifically configured to:
when the target mode is the read mode, setting the occupation ratio of the third bucket to be larger than a first occupation ratio threshold value;
if an available first bucket exists in the first device, writing data to be written into the available first bucket; otherwise, after recovering the occupied third bucket in the first device, writing the data to be written.
Optionally, the data processing module 303 is specifically configured to:
when the target mode is the write mode, setting the occupation ratio of the first storage bucket to be larger than a second occupation ratio threshold value;
and if the occupation ratio of the first bucket is larger than a third occupation ratio threshold value, writing data to be written after the data in the first bucket is written back to the second device, wherein the third occupation ratio threshold value is larger than the second occupation ratio threshold value.
Optionally, the data processing module 303 is specifically configured to:
when the target mode is the balanced mode, setting the number of the first buckets in the first device to be larger than the number of the third buckets.
Optionally, the cache device records positions of buckets in the first device and the second device by using a B + tree index, and a structure recorded in the B + tree node is ckey.
The data processing device of the cache device provided by the embodiment of the disclosure can execute the data processing method of the cache device provided by any embodiment of the disclosure, and has corresponding functional modules and beneficial effects of the execution method.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 9, the electronic device 400 includes one or more processors 401 and memory 402.
The processor 401 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 400 to perform desired functions.
Memory 402 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 401 to implement the data processing method of the cache device of the embodiment of the present disclosure described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 400 may further include: an input device 403 and an output device 404, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device 403 may also include, for example, a keyboard, a mouse, and the like.
The output device 404 may output various information to the outside, including the determined distance information, direction information, and the like. The output devices 404 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for simplicity, only some of the components of the electronic device 400 relevant to the present disclosure are shown in fig. 9, omitting components such as buses, input/output interfaces, and the like. In addition, electronic device 400 may include any other suitable components depending on the particular application.
In addition to the above methods and apparatuses, embodiments of the present disclosure may also be a computer program product including computer program instructions that, when executed by a processor, cause the processor to perform the data processing method of the cache apparatus provided by the embodiments of the present disclosure.
The computer program product may write program code for carrying out operations for embodiments of the present disclosure in any combination of one or more programming languages, including an object oriented programming language such as Java, C + + or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium on which computer program instructions are stored, which, when executed by a processor, cause the processor to execute the data processing method of the cache device provided by the embodiments of the present disclosure.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (14)

1. A data processing method of a cache device is characterized by comprising the following steps:
acquiring target data, wherein the target data comprises read-write data, equipment utilization rate and dirty data ratio;
determining a current corresponding target mode of the cache device according to the target data;
performing data processing based on the target mode;
the cache device comprises a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device comprise buckets of different types, the target mode comprises a plurality of modes, and the allocation proportions of the buckets of different types in different modes are different.
2. The method of claim 1, wherein the target mode comprises an idle mode, a read mode, a write mode, and a balanced mode.
3. The method of claim 2, wherein determining the current target mode corresponding to the cache device according to the target data comprises:
if the read-write data volume of the read-write data in the target data is smaller than a read-write threshold, determining that the target mode is the idle mode; otherwise, comparing the equipment utilization rate of the first equipment with a utilization rate threshold value;
and if the device utilization rate of the first device is greater than a utilization rate threshold value and the read load ratio in the read-write data is greater than a first preset threshold value, determining that the target mode is the read mode.
4. The method of claim 3, wherein determining the current target mode corresponding to the cache device according to the target data comprises:
and if the device utilization rate of the first device is greater than a utilization rate threshold, the write load ratio in the read-write data is greater than a second preset threshold, and the dirty data ratio is greater than a third preset threshold, determining that the target mode is the write mode.
5. The method of claim 3, wherein determining the current target mode corresponding to the cache device according to the target data comprises:
and if the device utilization rate of the first device is greater than a utilization rate threshold, the write load ratio and the read load ratio in the read-write data are the same, and the dirty data ratio is greater than a fourth preset threshold, determining that the target mode is the balanced mode.
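Read together, claims 3-5 amount to a threshold comparison over the sampled statistics. A sketch of that selection logic is below; every threshold value, the read/write share calculation, and the fallback branch are assumptions for illustration only.

```python
def determine_target_mode(read_bytes, write_bytes, ssd_utilization, dirty_ratio,
                          rw_threshold=64 * 1024 * 1024,   # read-write threshold (assumed)
                          util_threshold=0.8,              # utilization rate threshold (assumed)
                          read_threshold=0.7,              # first preset threshold (assumed)
                          write_threshold=0.7,             # second preset threshold (assumed)
                          dirty_write_threshold=0.5,       # third preset threshold (assumed)
                          dirty_balance_threshold=0.3):    # fourth preset threshold (assumed)
    total = read_bytes + write_bytes
    if total < rw_threshold:                       # claim 3: traffic below the read-write threshold
        return "idle"
    read_share = read_bytes / total
    write_share = write_bytes / total
    if ssd_utilization > util_threshold:
        if read_share > read_threshold:            # claim 3: read-heavy workload
            return "read"
        if write_share > write_threshold and dirty_ratio > dirty_write_threshold:
            return "write"                         # claim 4: write-heavy and dirty
        if read_share == write_share and dirty_ratio > dirty_balance_threshold:
            return "balanced"                      # claim 5: balanced load and dirty
    return "balanced"                              # fallback not specified by the claims
```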
6. The method of claim 2, wherein the different types of buckets comprise a first bucket, a second bucket, and a third bucket, wherein the first bucket is located in the first device, wherein the second bucket is located in the second device, wherein the third bucket is located in the first device and the second device, and wherein the types of buckets support dynamic adjustment.
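Claim 6 distinguishes the bucket types by where their space resides. A minimal, assumed representation (all names are illustrative) could be:

```python
from enum import Enum

class BucketType(Enum):
    FIRST = "first"    # space only in the SSD-based first device
    SECOND = "second"  # space only in the HDD-based second device
    THIRD = "third"    # space in both the first and the second device

class Bucket:
    def __init__(self, bucket_id: int, btype: BucketType):
        self.bucket_id = bucket_id
        self.btype = btype

    def retype(self, new_type: BucketType) -> None:
        # Bucket types support dynamic adjustment per claim 6; any data movement
        # the type change requires is outside this sketch.
        self.btype = new_type
```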
7. The method of claim 6, wherein performing data processing based on the target mode comprises:
and when the target mode is the idle mode, writing data to be written into a first bucket in the first device, and reading the data in the first device and the second device.
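A sketch of the idle-mode behaviour of claim 7, assuming a simple device API (allocate_first_bucket, contains, read, write):

```python
def idle_mode_io(write_reqs, read_reqs, first_device, second_device):
    """Idle mode: new writes land in first buckets on the SSD; reads hit either device."""
    for req in write_reqs:
        bucket = first_device.allocate_first_bucket()   # assumed allocator
        bucket.write(req)
    results = []
    for req in read_reqs:
        # Reads are served from whichever device currently holds the data.
        device = first_device if first_device.contains(req) else second_device
        results.append(device.read(req))
    return results
```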
8. The method of claim 6, wherein performing data processing based on the target mode comprises:
when the target mode is the read mode, setting the occupation ratio of the third bucket to be larger than a first occupation ratio threshold value;
if an available first bucket exists in the first device, writing data to be written into the available first bucket; otherwise, writing the data to be written after reclaiming an occupied third bucket in the first device.
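A sketch of the read-mode write path of claim 8; the ratio value and the cache methods are assumptions:

```python
def read_mode_write(cache, data, first_ratio_threshold=0.6):
    """Read mode: keep a large share of third buckets, reclaiming one when no
    first bucket is free for an incoming write."""
    # Keep the proportion of third buckets above the first occupation ratio threshold.
    cache.set_min_third_bucket_ratio(first_ratio_threshold)

    bucket = cache.find_available_first_bucket()
    if bucket is None:
        # No free first bucket on the SSD: reclaim an occupied third bucket there first.
        bucket = cache.reclaim_third_bucket_on_first_device()
    bucket.write(data)
```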
9. The method of claim 6, wherein performing data processing based on the target mode comprises:
when the target mode is the write mode, setting the occupation ratio of the first bucket to be larger than a second occupation ratio threshold value;
and if the occupation ratio of the first bucket is larger than a third occupation ratio threshold value, writing data to be written after the data in the first bucket is written back to the second device, wherein the third occupation ratio threshold value is larger than the second occupation ratio threshold value.
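A sketch of the write-mode write path of claim 9; both threshold values are assumptions, kept in the order the claim requires (third larger than second):

```python
def write_mode_write(cache, data, second_ratio_threshold=0.5, third_ratio_threshold=0.8):
    """Write mode: favour first buckets, flushing them to the HDD when they crowd the SSD."""
    assert third_ratio_threshold > second_ratio_threshold
    cache.set_min_first_bucket_ratio(second_ratio_threshold)

    if cache.first_bucket_ratio() > third_ratio_threshold:
        # First buckets dominate the SSD: write their data back to the second device
        # before accepting the new write.
        cache.write_back_first_buckets()
    cache.allocate_first_bucket().write(data)
```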
10. The method of claim 6, wherein performing data processing based on the target mode comprises:
when the target mode is the balanced mode, setting the number of the first buckets in the first device to be larger than the number of the third buckets.
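For the balanced mode of claim 10, one assumed way to keep more first buckets than third buckets in the first device is to convert third buckets one at a time:

```python
def apply_balanced_mode(first_device):
    """Balanced mode: ensure the SSD holds more first buckets than third buckets.
    Converting a third bucket into a first bucket relies on the dynamic type
    adjustment of claim 6; the data migration it implies is not modelled here."""
    first_count = first_device.count_buckets("first")
    third_count = first_device.count_buckets("third")
    while first_count <= third_count and third_count > 0:
        first_device.convert_third_bucket_to_first()
        first_count += 1
        third_count -= 1
```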
11. The method of claim 1, wherein the cache device records the locations of buckets in the first device and the second device using a B+ tree index, and wherein the structure of a record in a B+ tree node is ckey.
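Claim 11 indexes bucket locations with a B+ tree keyed on a ckey. The sketch below stands in for that index with a sorted in-memory structure; the record fields beyond the key (device, bucket id, offset) are assumptions, as the claim does not spell out the record layout.

```python
import bisect

class BucketLocationIndex:
    """Ordered index standing in for the B+ tree of claim 11 (in-memory only)."""

    def __init__(self):
        self._keys = []    # ckeys kept in sorted order, as a B+ tree would
        self._locs = {}    # ckey -> (device, bucket_id, offset)

    def insert(self, ckey, device, bucket_id, offset):
        if ckey not in self._locs:
            bisect.insort(self._keys, ckey)
        self._locs[ckey] = (device, bucket_id, offset)

    def lookup(self, ckey):
        return self._locs.get(ckey)

    def range(self, lo, hi):
        # Range scan over [lo, hi), the operation a B+ tree makes cheap.
        start = bisect.bisect_left(self._keys, lo)
        end = bisect.bisect_left(self._keys, hi)
        return [(k, self._locs[k]) for k in self._keys[start:end]]
```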
12. A data processing apparatus of a cache device, comprising:
a data acquisition module, configured to acquire target data, wherein the target data comprises read-write data, a device utilization rate, and a dirty data ratio;
a mode determining module, configured to determine a current target mode corresponding to the cache device according to the target data; and
a data processing module, configured to perform data processing based on the target mode;
wherein the cache device comprises a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device comprise buckets of different types, the target mode is one of a plurality of modes, and the allocation proportions of the different types of buckets differ between modes.
13. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the data processing method of the cache device according to any one of claims 1 to 11.
14. A computer-readable storage medium, characterized in that the storage medium stores a computer program for executing the data processing method of the cache device according to any one of claims 1 to 11.
CN202110642174.8A 2021-06-09 2021-06-09 Data processing method, device, equipment and medium of cache equipment Active CN113377291B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110642174.8A CN113377291B (en) 2021-06-09 2021-06-09 Data processing method, device, equipment and medium of cache equipment

Publications (2)

Publication Number Publication Date
CN113377291A true CN113377291A (en) 2021-09-10
CN113377291B CN113377291B (en) 2023-07-04

Family

ID=77573182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110642174.8A Active CN113377291B (en) 2021-06-09 2021-06-09 Data processing method, device, equipment and medium of cache equipment

Country Status (1)

Country Link
CN (1) CN113377291B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103902474A (en) * 2014-04-11 2014-07-02 华中科技大学 Mixed storage system and method for supporting solid-state disk cache dynamic distribution
CN107463424A (en) * 2016-06-02 2017-12-12 北京金山云网络技术有限公司 A kind of virtual machine migration method and device
CN106502592A (en) * 2016-10-26 2017-03-15 郑州云海信息技术有限公司 Solid state hard disc caching block recovery method and system
CN107015763A (en) * 2017-03-03 2017-08-04 北京中存超为科技有限公司 Mix SSD management methods and device in storage system
CN110502188A (en) * 2019-08-01 2019-11-26 苏州浪潮智能科技有限公司 A kind of date storage method and device based on data base read-write performance
CN111124304A (en) * 2019-12-19 2020-05-08 北京浪潮数据技术有限公司 Data migration method and device, electronic equipment and storage medium
CN111209253A (en) * 2019-12-30 2020-05-29 河南创新科信息技术有限公司 Distributed storage equipment performance improving method and device and distributed storage equipment
CN112130769A (en) * 2020-09-18 2020-12-25 苏州浪潮智能科技有限公司 Mechanical hard disk data processing method, device, equipment and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Zhen; LIU Wenjie; ZHANG Xiao; BU Hailong: "Survey of hybrid storage systems based on hard disk drives and solid-state drives" (基于磁盘和固态硬盘的混合存储系统研究综述), Journal of Computer Applications (计算机应用), no. 05 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113535096A (en) * 2021-09-16 2021-10-22 深圳创新科技术有限公司 Virtual NVMe solid-state drive storage construction method and device
CN113535096B (en) * 2021-09-16 2022-01-11 深圳创新科技术有限公司 Virtual NVMe solid-state drive storage construction method and device
CN113805812A (en) * 2021-09-22 2021-12-17 深圳宏芯宇电子股份有限公司 Cache management method, device, equipment and storage medium
CN113805812B (en) * 2021-09-22 2024-03-05 深圳宏芯宇电子股份有限公司 Cache management method, device, equipment and storage medium
CN114356213A (en) * 2021-11-29 2022-04-15 重庆邮电大学 Parallel space management method for NVM wear balance under NUMA architecture
CN115826882A (en) * 2023-02-15 2023-03-21 苏州浪潮智能科技有限公司 Storage method, device, equipment and storage medium
CN116450054A (en) * 2023-06-16 2023-07-18 成都泛联智存科技有限公司 IO request processing method, device, host and computer readable storage medium
CN116450054B (en) * 2023-06-16 2023-09-26 成都泛联智存科技有限公司 IO request processing method, device, host and computer readable storage medium

Also Published As

Publication number Publication date
CN113377291B (en) 2023-07-04

Similar Documents

Publication Publication Date Title
CN113377291B (en) Data processing method, device, equipment and medium of cache equipment
US11068409B2 (en) Method and system for user-space storage I/O stack with user-space flash translation layer
KR102510384B1 (en) Apparatus, system and method for caching compressed data background
US8521986B2 (en) Allocating storage memory based on future file size or use estimates
US9146688B2 (en) Advanced groomer for storage array
KR102093523B1 (en) Working set swapping using a sequentially ordered swap file
US8161240B2 (en) Cache management
US9058212B2 (en) Combining memory pages having identical content
US8572325B2 (en) Dynamic adjustment of read/write ratio of a disk cache
CN108628772A (en) Device and method for managing the mapping data in data storage device
US10430329B2 (en) Quality of service aware storage class memory/NAND flash hybrid solid state drive
CN105095116A (en) Cache replacing method, cache controller and processor
CN110347338B (en) Hybrid memory data exchange processing method, system and readable storage medium
JP2005293205A (en) Storage control device, control method, and control program
JP3444346B2 (en) Virtual memory management method
CN114327272B (en) Data processing method, solid state disk controller and solid state disk
US20130173855A1 (en) Method of operating storage device including volatile memory and nonvolatile memory
US10891239B2 (en) Method and system for operating NAND flash physical space to extend memory capacity
US8364893B2 (en) RAID apparatus, controller of RAID apparatus and write-back control method of the RAID apparatus
CN108984117B (en) Data reading and writing method, medium and equipment
JP6919277B2 (en) Storage systems, storage management devices, storage management methods, and programs
JP4792065B2 (en) Data storage method
US20210263648A1 (en) Method for managing performance of logical disk and storage array
CN110851273A (en) Program processing method based on hybrid memory and device based on hybrid memory
CN116820861B (en) Method and device for testing enterprise-level solid state disk garbage collection mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant