CN113377291B - Data processing method, device, equipment and medium of cache device

Data processing method, device, equipment and medium of cache device

Info

Publication number
CN113377291B
Authority
CN
China
Prior art keywords
data
mode
target
read
equipment
Prior art date
Legal status
Active
Application number
CN202110642174.8A
Other languages
Chinese (zh)
Other versions
CN113377291A (en)
Inventor
张朝潞
Current Assignee
Beijing Topsec Technology Co Ltd
Beijing Topsec Network Security Technology Co Ltd
Beijing Topsec Software Co Ltd
Original Assignee
Beijing Topsec Technology Co Ltd
Beijing Topsec Network Security Technology Co Ltd
Beijing Topsec Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Topsec Technology Co Ltd, Beijing Topsec Network Security Technology Co Ltd, Beijing Topsec Software Co Ltd
Priority to CN202110642174.8A
Publication of CN113377291A
Application granted
Publication of CN113377291B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638 Organizing or formatting or addressing of data
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Embodiments of the disclosure relate to a data processing method, apparatus, device and medium for a cache device. The method includes: acquiring target data, where the target data includes read-write data, device usage and dirty data ratio; determining the target mode currently corresponding to the cache device according to the target data; and performing data processing based on the target mode. The cache device includes a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device include different types of buckets, the target mode includes a plurality of modes, and the allocation ratio of the different types of buckets differs between modes. In the embodiments of the disclosure, because the cache device comprising the solid state disk and the mechanical hard disk is divided into different types of buckets, the mode in which the cache device processes data can be adjusted dynamically, which greatly improves the performance of the cache device and enhances its availability.

Description

Data processing method, device, equipment and medium of cache device
Technical Field
The disclosure relates to the field of communication technologies, and in particular, to a data processing method, device, equipment and medium for a cache device.
Background
Because a hard disk drive (HDD) is relatively slow, in practical scenarios a solid state drive (SSD) is generally used as a cache device for the HDD to accelerate performance.
When the solid state disk used as the cache is written full, the application load and the write-back (writeback) load both fall on the mechanical disk, which greatly reduces the performance of the whole storage system. At present, a common approach is to obtain the amount of dirty data on the cache disk and decide whether to execute a write operation according to that amount: a check must be performed before every write operation, the write is executed only when the condition is met, and when the cache becomes full the application load is throttled according to the write-back speed. However, this approach consumes additional processor resources and cannot handle complex load situations.
Disclosure of Invention
In order to solve the above technical problems or at least partially solve the above technical problems, the present disclosure provides a data processing method, apparatus, device and medium of a cache device.
An embodiment of the disclosure provides a data processing method for a cache device, which includes the following steps:
acquiring target data, where the target data includes read-write data, device usage and dirty data ratio;
determining the target mode currently corresponding to the cache device according to the target data;
performing data processing based on the target mode;
where the cache device includes a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device include different types of buckets, the target mode includes a plurality of modes, and the allocation ratio of the different types of buckets differs between modes.
An embodiment of the disclosure further provides a data processing apparatus for a cache device, which includes:
a data acquisition module, configured to acquire target data, where the target data includes read-write data, device usage and dirty data ratio;
a mode determining module, configured to determine the target mode currently corresponding to the cache device according to the target data;
a data processing module, configured to perform data processing based on the target mode;
where the cache device includes a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device include different types of buckets, the target mode includes a plurality of modes, and the allocation ratio of the different types of buckets differs between modes.
An embodiment of the disclosure further provides an electronic device, which includes: a processor; and a memory for storing instructions executable by the processor; where the processor is configured to read the executable instructions from the memory and execute them to implement the data processing method of a cache device provided by the embodiments of the disclosure.
The embodiments of the present disclosure also provide a computer-readable storage medium storing a computer program for executing the data processing method of the cache device as provided by the embodiments of the present disclosure.
Compared with the prior art, the technical solution provided by the embodiments of the disclosure has the following advantages. In the data processing scheme for a cache device, target data is acquired, where the target data includes read-write data, device usage and dirty data ratio; the target mode currently corresponding to the cache device is determined according to the target data; and data processing is performed based on the target mode; the cache device includes a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device include different types of buckets, the target mode includes a plurality of modes, and the allocation ratio of the different types of buckets differs between modes. With this technical solution, the cache device comprising a solid state disk and a mechanical hard disk is divided into different types of buckets, the current mode can be determined in real time from data such as the reads, writes and usage of the cache device, and data reads and writes are executed based on the allocation ratio of the different types of buckets in the current mode. The mode in which the cache device processes data is thus adjusted dynamically, the performance of the cache device is greatly improved, the drawbacks of the related art, namely consuming extra resources and being unable to handle complex load situations, are avoided, and the availability of the cache device is enhanced.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flow chart of a data processing method of a cache device according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a cache device according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a bucket in a cache device according to an embodiment of the present disclosure;
fig. 4 is a flowchart illustrating another data processing method of a cache device according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a mode transition provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a B+ tree provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a node according to an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of a data processing apparatus of a cache device according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
Because a mechanical hard disk (HDD) is relatively slow, in practical scenarios a solid state drive (SSD) is generally used as a cache device for the mechanical hard disk to accelerate performance; Bcache and Flashcache in Linux, for example, use a fast solid state disk to accelerate a slow mechanical disk.
When the solid state disk used as the cache is written full, the application load and the write-back (writeback) load both fall on the mechanical disk, which greatly reduces the performance of the whole storage system. At present, a common approach is to obtain the amount of dirty data on the cache disk and decide whether to execute a write operation according to that amount: a check must be performed before every write operation, the write is executed only when the condition is met, and when the cache becomes full the application load is throttled according to the write-back speed. However, this approach only controls write operations according to the amount of dirty data on the cache disk and cannot accurately control the write rate of the cache disk according to the dirty-data flush rate of the back-end disk; a capacity check of the cache disk must be performed for every write operation, which consumes additional central processing unit (CPU) resources; and complex application loads cannot be handled by a simple rate-limiting method alone.
In order to solve the above-mentioned problems, an embodiment of the present disclosure provides a data processing method of a cache device, and the method is described below with reference to specific embodiments.
Fig. 1 is a flow chart of a data processing method of a cache device according to an embodiment of the present disclosure, where the method may be performed by a data processing apparatus of the cache device, where the apparatus may be implemented by software and/or hardware, and may generally be integrated in an electronic device. As shown in fig. 1, the method is applied to a cache device, and includes:
and 101, acquiring target data.
In this embodiment, the cache device may include a first device based on a solid state disk and a second device based on a mechanical hard disk, where the first device and the second device include different types of buckets.
Fig. 2 is a schematic diagram of a cache device provided by an embodiment of the disclosure. As shown in Fig. 2, the cache device may include a first device and a second device, and may be a logical cache device: the first device based on a solid state disk and the second device based on a mechanical hard disk are packaged and abstracted into one device that is virtually created by the operating system. The cache device routes and splits data read/write (I/O) requests through a block device driver and finally places the data in the first device or the second device.
A bucket can be understood as a unit of storage space in a device, a container for stored objects, and may be equal in size to a physical sector, typically 512 bytes. In the embodiments of the disclosure, the first device and the second device in the cache device are each divided into fixed-size buckets for managing data. The logical space of the cache device in this embodiment is composed of the first device and the second device.
The cache device in this embodiment may be provided with a plurality of types of buckets, which are used to represent different storage locations and states, where the types of the buckets support dynamic adjustment. Optionally, the different types of buckets include a first bucket, a second bucket, and a third bucket, the first bucket being located in the first device, the second bucket being located in the second device, the third bucket being located in the first device and the second device.
For example, Fig. 3 is a schematic diagram of buckets in a cache device provided in an embodiment of the disclosure. As shown in Fig. 3, three types of buckets may be provided in this embodiment: a first bucket, a second bucket and a third bucket. Blocks numbered 1 represent first buckets, blocks numbered 2 represent second buckets, blocks numbered 3 represent third buckets, and unnumbered blocks may be free buckets whose type can be adjusted dynamically. A first bucket is located in the first device; its data is stored only in the first device and reclamation is forbidden. A second bucket is located in the second device; its data is stored only in the second device. A third bucket may be located in both the first device and the second device; identical copies of its data are stored in the first device and the second device, and the third bucket in the first device supports reclamation.
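As an illustration of the bucket metadata just described, the following C sketch models the bucket types and per-bucket fields; all names (bucket_type, BUCKET_FIRST and so on) and the choice of fields are assumptions made for illustration and do not come from the patent.

```c
/* Minimal sketch of the bucket metadata described above; names and fields are
 * illustrative assumptions, not taken from the patent. */
#include <stdint.h>

#define BUCKET_SIZE 512u /* bytes; equal to a physical sector, as stated above */

enum bucket_type {
    BUCKET_FREE   = 0, /* unnumbered block in Fig. 3; its type may be reassigned */
    BUCKET_FIRST  = 1, /* data only in the first (SSD) device; reclamation forbidden */
    BUCKET_SECOND = 2, /* data only in the second (HDD) device */
    BUCKET_THIRD  = 3, /* identical copies in both devices; the SSD copy is reclaimable */
};

struct bucket {
    uint64_t         index; /* bucket number within its device */
    enum bucket_type type;  /* dynamically adjustable, as described above */
};
```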
The target data may be data that affects data processing performance in the cache device and may include multiple types of data. In this embodiment the target data may include read-write data, device usage and dirty data ratio. The read-write data is data related to data reads and data writes; for example, it may include the read-write data amount, the read load ratio, the write load ratio and the like. The device usage may be the usage of the first device in the cache device. The dirty data ratio may be the ratio of dirty data to the total data in the cache device.
Specifically, the cache device may acquire the target data at a preset time interval, where the preset time interval may be a fixed value or an incremental value and may be set according to the actual situation.
Step 102, determining the target mode currently corresponding to the cache device according to the target data.
The target mode includes a plurality of modes; in this embodiment the target mode may include an idle mode, a read mode, a write mode and a balanced mode, and the allocation ratios of the different types of buckets differ between modes.
The idle mode corresponds to a scenario in which the overall number of data read and write operations in the cache device is low. The read mode corresponds to a scenario in which data read operations dominate in the cache device. The write mode corresponds to a scenario in which data write operations dominate in the cache device. The balanced mode corresponds to a scenario in which data read operations and data write operations in the cache device are roughly balanced.
In this embodiment, determining the target mode currently corresponding to the cache device according to the target data may include: if the read-write data amount of the read-write data in the target data is smaller than a read-write threshold, determining that the target mode is the idle mode; otherwise, comparing the device usage of the first device with a usage threshold; and if the device usage of the first device is greater than the usage threshold and the read load ratio in the read-write data is greater than a first preset threshold, determining that the target mode is the read mode.
The read-write data amount may be the total number of data read operations and data write operations. The usage threshold may be set according to the actual situation; for example, it may be 60%. The read load ratio is the ratio of data read operations to the read-write data amount. The first preset threshold is a threshold for the read load ratio and may be set according to the actual situation; for example, it may be 65%.
In this embodiment, after the target data is obtained, the read-write data amount of the read-write data in the target data may be extracted and compared with the read-write threshold; if the read-write data amount is smaller than the read-write threshold, the target mode may be determined to be the idle mode. If the read-write data amount is greater than or equal to the read-write threshold, the device usage of the first device may be compared with the usage threshold. If the device usage of the first device is greater than the usage threshold and the read load ratio is greater than the first preset threshold, the target mode may be determined to be the read mode.
In the embodiments of the disclosure, determining the target mode currently corresponding to the cache device according to the target data may include: if the device usage of the first device is greater than the usage threshold, the write load ratio in the read-write data is greater than a second preset threshold, and the dirty data ratio is greater than a third preset threshold, determining that the target mode is the write mode. Optionally, it may also include: if the device usage of the first device is greater than the usage threshold, the write load ratio and the read load ratio in the read-write data are the same, and the dirty data ratio is greater than a fourth preset threshold, determining that the target mode is the balanced mode.
The write load ratio is the ratio of data write operations to the read-write data amount. The second preset threshold is a threshold for the write load ratio and may be set according to the actual situation; it may be the same as or different from the first preset threshold. The third preset threshold and the fourth preset threshold are two thresholds for the dirty data ratio; for example, the third preset threshold may be 60% and the fourth preset threshold may be 50%.
In this embodiment, when the usage of the first device is greater than the usage threshold, the target mode may be determined according to the write load ratio and the dirty data ratio in the read-write data: when the write load ratio in the read-write data is greater than the second preset threshold and the dirty data ratio is greater than the third preset threshold, the target mode may be determined to be the write mode; when the write load ratio and the read load ratio in the read-write data are the same, both being 50%, and the dirty data ratio is greater than the fourth preset threshold, the target mode may be determined to be the balanced mode.
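The threshold rules above can be read as a single selection step. The following C sketch is a hypothetical rendering of that logic; the structure and function names, the fallback of keeping the current mode when no rule matches, and the concrete numbers simply reuse the example values given above (60% usage, 65% read load, 60% and 50% dirty data ratios) and are assumptions rather than a definitive implementation.

```c
/* Hypothetical mode-selection sketch; names, the "no change" fallback and the
 * numeric thresholds are assumptions based on the example values above. */
#include <stdint.h>

enum cache_mode { MODE_IDLE, MODE_READ, MODE_WRITE, MODE_BALANCED, MODE_UNCHANGED };

struct target_data {
    uint64_t io_count;    /* read-write data amount: total reads + writes sampled */
    double   read_ratio;  /* read load ratio  = reads  / io_count */
    double   write_ratio; /* write load ratio = writes / io_count */
    double   ssd_usage;   /* device usage of the first (SSD) device */
    double   dirty_ratio; /* dirty data / total data in the cache device */
};

enum cache_mode select_mode(const struct target_data *td, uint64_t rw_threshold)
{
    if (td->io_count < rw_threshold)
        return MODE_IDLE;                  /* low overall read-write data amount */

    if (td->ssd_usage > 0.60) {            /* usage threshold, e.g. 60% */
        if (td->read_ratio > 0.65)         /* first preset threshold, e.g. 65% */
            return MODE_READ;
        if (td->write_ratio > 0.65 && td->dirty_ratio > 0.60)
            return MODE_WRITE;             /* second and third preset thresholds */
        if (td->read_ratio == td->write_ratio && td->dirty_ratio > 0.50)
            return MODE_BALANCED;          /* fourth preset threshold, e.g. 50% */
    }
    return MODE_UNCHANGED;                 /* assumption: keep the current mode */
}
```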
Step 103, performing data processing based on the target mode.
The data processing may include the data read operations and data write operations described above.
In the embodiments of the disclosure, after the cache device obtains the target mode, data processing may be performed according to the allocation ratios of the different types of buckets corresponding to the target mode.
Optionally, when the target mode is the idle mode, data to be written is written into a first bucket in the first device, and data is read from the first device and the second device. When the cache device is in the idle mode, the read-write data amount is low, so the allocation ratio of the different types of buckets in the cache device is not restricted, and data to be written can be written directly into a first bucket in the first device. For a read, if the requested data is present in a bucket of the first device, the read can be served directly; otherwise the data is read from the second device, the read data is written into a free bucket of the first device, and the type of that free bucket is changed to a third bucket.
Optionally, when the target mode is the read mode, the proportion of third buckets is set to be greater than a first proportion threshold; if an available first bucket exists in the first device, the data to be written is written into the available first bucket; otherwise, an occupied third bucket in the first device is reclaimed and the data to be written is then written.
When the target mode is the read mode, there are few data write operations, so the proportion of third buckets may be set to be greater than the first proportion threshold, and the combined proportion of first buckets and second buckets may accordingly be kept small. The first proportion threshold may be a relatively large value; for example, it may be 80%. In the read mode, if a data write request is received, the data to be written can be written into an available first bucket of the first device; if there is no available first bucket in the first device, a reclamation algorithm may be used to reclaim an occupied third bucket in the first device, and the data to be written is then written into the reclaimed free bucket, whose type is changed to a first bucket. The reclamation algorithm may be chosen according to the actual situation; for example, a least recently used (LRU) algorithm may be used.
Optionally, when the target mode is the write mode, the proportion of first buckets is set to be greater than a second proportion threshold; and if the proportion of first buckets is greater than a third proportion threshold, the data in the first buckets is written back to the second device before the data to be written is written, where the third proportion threshold is greater than the second proportion threshold. The second proportion threshold and the third proportion threshold are both proportion values for first buckets; the third proportion threshold is greater than the second proportion threshold, and the specific values can be set according to the actual conditions.
When the target mode is the write mode, there are few data read operations, so the proportion of first buckets may be set to be greater than the second proportion threshold, and the write rate may be controlled based on the number of buckets available in the first device. In the write mode, the number of first buckets increases as data write operations are executed, and the proportion of first buckets can be compared with the third proportion threshold. If the proportion of first buckets is greater than the third proportion threshold, part of the data in the first buckets can be written back to the second device; after the write-back completes, those first buckets are changed into third buckets, the third buckets can then be reclaimed, that is, changed into free buckets, the data to be written is written into the free buckets, and the free buckets are changed into first buckets.
Optionally, when the target mode is the balanced mode, the number of first buckets in the first device is set to be greater than the number of third buckets.
When the target mode is the balanced mode, the proportions of data read operations and data write operations are close. From the application's point of view, a data write operation generally needs to read and modify data before new data is written, so in the balanced mode the buckets in the first device can be preferentially allocated as first buckets; that is, the number of first buckets is set to be greater than the number of third buckets so that data write operations are satisfied first. As data write operations are executed, the number of first buckets increases; data in part of the first buckets can be written back to the second device, those first buckets are changed into third buckets once the write-back completes, the third buckets can then be reclaimed, that is, changed into free buckets, the data to be written is written into the free buckets, and the free buckets are changed into first buckets, and so the cycle continues. Dynamically adjusting the allocation ratio of the buckets in this way accelerates write performance while restraining read performance, which in turn limits the number of data write operations the application program can issue and prevents exhaustion of the first buckets in the first device from affecting the normal operation of the application.
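The write-back rule shared by the write mode and the balanced mode can be illustrated with a small, self-contained check. This is only a sketch under assumed names and example percentages; a real device would also demote the written-back first buckets to third buckets and recycle them, as described above.

```c
/* Sketch of the back-pressure rule described above: once the share of first
 * buckets on the SSD exceeds the (larger) third proportion threshold, data is
 * written back to the HDD and buckets are recycled before new writes proceed.
 * The struct, the names and the 0.85 value are assumptions. */
#include <stdio.h>

struct ssd_bucket_stats {
    unsigned total_buckets;  /* all buckets of the first (SSD) device */
    unsigned first_buckets;  /* first buckets: SSD-only data, not yet written back */
    unsigned third_buckets;  /* third buckets: duplicated on the HDD, reclaimable */
    unsigned free_buckets;   /* free buckets */
};

/* Returns 1 if a write-back pass should run before the next write is admitted. */
static int need_writeback(const struct ssd_bucket_stats *s, double third_threshold)
{
    return (double)s->first_buckets / (double)s->total_buckets > third_threshold;
}

int main(void)
{
    struct ssd_bucket_stats s = { .total_buckets = 1000, .first_buckets = 870,
                                  .third_buckets = 100, .free_buckets = 30 };
    const double third_threshold = 0.85; /* larger than the second proportion threshold */

    if (need_writeback(&s, third_threshold))
        printf("write back some first buckets, demote them to third buckets, then recycle\n");
    return 0;
}
```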
In this scheme, the first device and the second device of the cache device are provided with the three different types of buckets for performing data processing, and the allocation ratio of the different types of buckets is adjusted dynamically in the different modes, so that the cache device can achieve optimal data read and write performance and its availability is enhanced.
In the embodiments of the disclosure, the cache device may use a B+ tree index to record the positions of the buckets in the first device and the second device, and the structure recorded in a B+ tree node is a ckey.
Because the storage capacity of the cache device is large, the number of buckets it is divided into is also large. In this embodiment the buckets can therefore be managed with a B+ tree: a B+ tree index records the positions of the buckets in the first device and the second device, that is, the mapping relationship; each ckey corresponds to one bucket, and the structure recorded in the B+ tree nodes is the ckey.
Through the ckey, the buckets of the first device and the second device in the cache device can be managed at the same time, which avoids having to build separate indexes for the two devices and greatly improves lookup performance. The ckey also allows the first device to be used when the second device is full: all buckets of the second device are set as second buckets and all buckets of the first device as first buckets, so a full disk does not affect the normal operation of the application.
According to the data processing scheme for a cache device, target data is acquired, where the target data includes read-write data, device usage and dirty data ratio; the target mode currently corresponding to the cache device is determined according to the target data; and data processing is performed based on the target mode; the cache device includes a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device include different types of buckets, the target mode includes a plurality of modes, and the allocation ratio of the different types of buckets differs between modes. With this technical solution, the cache device comprising a solid state disk and a mechanical hard disk is divided into different types of buckets, the current mode can be determined in real time from data such as the reads, writes and usage of the cache device, and data reads and writes are executed based on the allocation ratio of the different types of buckets in the current mode. The mode in which the cache device processes data is thus adjusted dynamically, the performance of the cache device is greatly improved, the drawbacks of the related art, namely consuming extra resources and being unable to handle complex load situations, are avoided, and the availability of the cache device is enhanced.
Fig. 4 is a flow chart of another data processing method of a cache device according to an embodiment of the present disclosure, where the data processing method of the cache device is further optimized based on the foregoing embodiment. As shown in fig. 4, the method includes:
step 201, obtaining target data.
The target data includes read-write data, device usage and dirty data ratio.
Step 202, determining the target mode currently corresponding to the cache device according to the target data.
The cache device includes a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device include different types of buckets, the target mode includes a plurality of modes, and the allocation ratio of the different types of buckets differs between modes. Optionally, the different types of buckets include a first bucket, a second bucket and a third bucket; the first bucket is located in the first device, the second bucket is located in the second device, the third bucket is located in the first device and the second device, and the types of the buckets support dynamic adjustment.
The target mode may include an idle mode, a read mode, a write mode and a balanced mode.
Optionally, determining the target mode currently corresponding to the cache device according to the target data may include: if the read-write data amount of the read-write data in the target data is smaller than a read-write threshold, determining that the target mode is the idle mode; otherwise, comparing the device usage of the first device with a usage threshold; and if the device usage of the first device is greater than the usage threshold and the read load ratio in the read-write data is greater than a first preset threshold, determining that the target mode is the read mode.
Optionally, determining the target mode currently corresponding to the cache device according to the target data may include: if the device usage of the first device is greater than the usage threshold, the write load ratio in the read-write data is greater than a second preset threshold, and the dirty data ratio is greater than a third preset threshold, determining that the target mode is the write mode.
Optionally, determining the target mode currently corresponding to the cache device according to the target data may include: if the device usage of the first device is greater than the usage threshold, the write load ratio and the read load ratio in the read-write data are the same, and the dirty data ratio is greater than a fourth preset threshold, determining that the target mode is the balanced mode.
After step 202, step 203, steps 204-205, steps 206-207 or step 208 may be performed, depending on the actual situation.
Step 203, when the target mode is the idle mode, writing the data to be written into a first bucket in the first device, and reading data from the first device and the second device.
Step 204, when the target mode is the read mode, setting the proportion of third buckets to be greater than a first proportion threshold.
Step 205, if an available first bucket exists in the first device, writing the data to be written into the available first bucket; otherwise, reclaiming an occupied third bucket in the first device and then writing the data to be written.
Step 206, when the target mode is the write mode, setting the proportion of first buckets to be greater than a second proportion threshold.
Step 207, if the proportion of first buckets is greater than a third proportion threshold, writing the data in the first buckets back to the second device and then writing the data to be written, where the third proportion threshold is greater than the second proportion threshold.
Step 208, when the target mode is the balanced mode, setting the number of first buckets in the first device to be greater than the number of third buckets.
For example, Fig. 5 is a schematic diagram of mode transitions provided by an embodiment of the disclosure. As shown in Fig. 5, the target mode of the cache device may be determined and dynamically adjusted according to the load condition, device usage and other data collected within a preset time; the figure shows that the idle mode may switch dynamically to the read mode, the write mode or the balanced mode, and data read and write operations are performed based on the mode switched to. It will be appreciated that dynamic switching among the read mode, the write mode and the balanced mode may also be implemented according to the actual situation (not shown in the figure).
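The dynamic switching illustrated in Fig. 5 can be pictured as a periodic sampling loop. The sketch below builds on the select_mode() sketch given earlier; collect_target_data() is a stub standing in for whatever statistics source a real device would use, and the loop structure and interval handling are assumptions.

```c
/* Hypothetical monitoring loop for the dynamic mode switching of Fig. 5.
 * Reuses struct target_data, enum cache_mode and select_mode() from the
 * earlier sketch; collect_target_data() is a stub. */
#include <stdint.h>
#include <unistd.h>

static struct target_data collect_target_data(void)
{
    /* stub: a real implementation would read the I/O counters, SSD usage and
     * dirty data ratio accumulated since the previous sample */
    struct target_data td = { .io_count = 0 };
    return td;
}

void mode_monitor(enum cache_mode *current, unsigned interval_sec, uint64_t rw_threshold)
{
    for (;;) {
        struct target_data td = collect_target_data();
        enum cache_mode next = select_mode(&td, rw_threshold);
        if (next != MODE_UNCHANGED && next != *current)
            *current = next;  /* bucket allocation ratios are re-applied on a switch */
        sleep(interval_sec);  /* the preset time interval described above */
    }
}
```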
In the embodiments of the disclosure, the cache device may use a B+ tree index to record the positions of the buckets in the first device and the second device, and the structure recorded in a B+ tree node is a ckey.
For example, Fig. 6 is a schematic diagram of a B+ tree provided by an embodiment of the disclosure. A B+ tree index is used to record the positions of the buckets in the first device and the second device, that is, the mapping relationship; each B+ tree node is a group of sorted ckeys, each ckey corresponds to one bucket, and the recorded structure is the ckey.
Fig. 7 is a schematic diagram of a node according to an embodiment of the disclosure. As shown in Fig. 7, the ckey record of a B+ tree node may include a logical block address (LBA), a first device identifier, a second device identifier, a first device offset, a second device offset, a generation number and a type identifier. The LBA represents the number of a bucket in the cache device, that is, its logical location; the B+ tree sorts the ckeys within a node by LBA, and the largest LBA in a node is used as the node's key for sorting within the B+ tree. The first device identifier may be denoted CID and identifies the first device; there may be multiple first devices in this solution. The second device identifier may be denoted BID and identifies the second device; there may be multiple second devices in this solution. The first device offset may be denoted Coffset and indicates the bucket number in the first device that the cache device bucket corresponds to; combined with the CID, it locates the specific position. The second device offset may be denoted Boffset and indicates the bucket number in the second device that the cache device bucket corresponds to; combined with the BID, it locates the specific position. The generation number may be denoted Gen and represents the generation of the ckey; because the B+ tree is held in memory and must be persisted to disk, and adjusting the on-disk B+ tree directly would cause a huge performance loss, the generation number is used to modify the on-disk B+ tree based on a log. The type identifier may be denoted Type and indicates the type of the bucket, which may specifically be a first bucket, a second bucket or a third bucket.
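For concreteness, the ckey record just described could be laid out as follows; only the set of fields (LBA, CID, BID, Coffset, Boffset, Gen, Type) follows the description, while the field widths and C names are assumptions.

```c
/* Illustrative layout of the ckey record of Fig. 7; field widths and names
 * are assumed, only the field set follows the description above. */
#include <stdint.h>

struct ckey {
    uint64_t lba;     /* LBA: bucket number in the logical cache device; B+ tree sort key */
    uint16_t cid;     /* identifier of the first (SSD) device; several may exist */
    uint16_t bid;     /* identifier of the second (HDD) device; several may exist */
    uint64_t coffset; /* bucket number inside the first device, resolved together with cid */
    uint64_t boffset; /* bucket number inside the second device, resolved together with bid */
    uint32_t gen;     /* generation number, used for log-based updates of the on-disk B+ tree */
    uint8_t  type;    /* bucket type: first, second or third bucket */
};

/* A B+ tree node holds a sorted array of ckeys; the largest LBA in the node
 * is used as the node's key for sorting within the tree (see Fig. 6). */
```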
Through the ckey, the buckets of the first device and the second device in the cache device can be managed at the same time, which avoids having to build separate indexes for the two devices and greatly improves lookup performance. The ckey also allows the first device to be used when the second device is full: all buckets of the second device are set as second buckets and all buckets of the first device as first buckets, so a full disk does not affect the normal operation of the application.
In this scheme, the first device based on a solid state disk and the second device based on a mechanical hard disk in the cache device are uniformly divided into fixed-size buckets, a single B+ tree is used to manage the buckets of both devices, the values in the ckeys are adjusted dynamically according to real-time read-write conditions and device usage, and different algorithms are used for different load modes. This dynamic, flexible approach solves the problem in the related art that poor read-write flow control degrades the overall performance of the cache device, allows performance to be maximized, avoids the problems that a traditional cache device fills up easily and cannot enlarge its overall storage space, and improves usability.
According to the data processing scheme for a cache device, target data is acquired, where the target data includes read-write data, device usage and dirty data ratio; the target mode currently corresponding to the cache device is determined according to the target data; and data processing is performed based on the target mode; the cache device includes a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device include different types of buckets, the target mode includes a plurality of modes, and the allocation ratio of the different types of buckets differs between modes. With this technical solution, the cache device comprising a solid state disk and a mechanical hard disk is divided into different types of buckets, the current mode can be determined in real time from data such as the reads, writes and usage of the cache device, and data reads and writes are executed based on the allocation ratio of the different types of buckets in the current mode. The mode in which the cache device processes data is thus adjusted dynamically, the performance of the cache device is greatly improved, the drawbacks of the related art, namely consuming extra resources and being unable to handle complex load situations, are avoided, and the availability of the cache device is enhanced.
Fig. 8 is a schematic structural diagram of a data processing apparatus of a cache device according to an embodiment of the present disclosure, where the apparatus may be implemented by software and/or hardware, and may be generally integrated in an electronic device.
As shown in fig. 8, the apparatus includes:
a data acquisition module 301, configured to acquire target data, where the target data includes read-write data, device usage and dirty data ratio;
a mode determining module 302, configured to determine the target mode currently corresponding to the cache device according to the target data;
a data processing module 303, configured to perform data processing based on the target mode;
where the cache device includes a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device include different types of buckets, the target mode includes a plurality of modes, and the allocation ratio of the different types of buckets differs between modes.
Optionally, the target mode includes an idle mode, a read mode, a write mode and a balanced mode.
Optionally, the mode determining module 302 is specifically configured to:
if the read-write data amount of the read-write data in the target data is smaller than a read-write threshold, determine that the target mode is the idle mode; otherwise, compare the device usage of the first device with a usage threshold;
and if the device usage of the first device is greater than the usage threshold and the read load ratio in the read-write data is greater than a first preset threshold, determine that the target mode is the read mode.
Optionally, the mode determining module 302 is specifically configured to:
and if the device usage of the first device is greater than the usage threshold, the write load ratio in the read-write data is greater than a second preset threshold, and the dirty data ratio is greater than a third preset threshold, determine that the target mode is the write mode.
Optionally, the mode determining module 302 is specifically configured to:
and if the device usage of the first device is greater than a usage threshold, the write load ratio and the read load ratio in the read-write data are the same, and the dirty data ratio is greater than a fourth preset threshold, determine that the target mode is the balanced mode.
Optionally, the different types of buckets include a first bucket, a second bucket and a third bucket, the first bucket is located in the first device, the second bucket is located in the second device, the third bucket is located in the first device and the second device, and the types of the buckets support dynamic adjustment.
Optionally, the data processing module 303 is specifically configured to:
and when the target mode is the idle mode, write data to be written into a first bucket in the first device, and read data from the first device and the second device.
Optionally, the data processing module 303 is specifically configured to:
when the target mode is the read mode, set the proportion of third buckets to be greater than a first proportion threshold;
if an available first bucket exists in the first device, write the data to be written into the available first bucket; otherwise, reclaim an occupied third bucket in the first device and then write the data to be written.
Optionally, the data processing module 303 is specifically configured to:
when the target mode is the write mode, set the proportion of first buckets to be greater than a second proportion threshold;
and if the proportion of first buckets is greater than a third proportion threshold, write the data in the first buckets back to the second device and then write the data to be written, where the third proportion threshold is greater than the second proportion threshold.
Optionally, the data processing module 303 is specifically configured to:
and when the target mode is the balanced mode, set the number of first buckets in the first device to be greater than the number of third buckets.
Optionally, the cache device records the positions of the buckets in the first device and the second device using a B+ tree index, and the structure recorded in a B+ tree node is a ckey.
The data processing apparatus of the cache device provided by the embodiments of the disclosure can execute the data processing method of the cache device provided by any embodiment of the disclosure, and has functional modules corresponding to the executed method and the corresponding beneficial effects.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 9, the electronic device 400 includes one or more processors 401 and memory 402.
The processor 401 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities and may control other components in the electronic device 400 to perform desired functions.
Memory 402 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), hard disks, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 401 to implement the data processing method of the cache device of the embodiments of the disclosure described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, and the like may also be stored in the computer-readable storage medium.
In one example, the electronic device 400 may further include: an input device 403 and an output device 404, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
In addition, the input device 403 may also include, for example, a keyboard, a mouse, and the like.
The output device 404 may output various information to the outside, including the determined distance information, direction information, and the like. The output device 404 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, for simplicity, only some of the components of the electronic device 400 that are relevant to the present disclosure are shown in Fig. 9; components such as buses and input/output interfaces are omitted. In addition, electronic device 400 may include any other suitable components depending on the particular application.
In addition to the methods and apparatus described above, embodiments of the present disclosure may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the data processing method of the cache apparatus provided by the embodiments of the present disclosure.
The computer program product may include program code for performing the operations of the embodiments of the present disclosure, written in any combination of one or more programming languages, including object oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium, on which computer program instructions are stored, which when executed by a processor, cause the processor to perform a data processing method of a cache device provided by embodiments of the present disclosure.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (14)

1. A data processing method of a cache device, comprising:
acquiring target data, wherein the target data comprises read-write data, device usage and dirty data ratio;
determining a target mode currently corresponding to the cache device according to the target data;
performing data processing based on the target mode;
wherein the cache device comprises a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device comprise different types of buckets, the target mode comprises a plurality of modes, and the allocation ratio of the different types of buckets differs between modes.
2. The method of claim 1, wherein the target mode comprises an idle mode, a read mode, a write mode, and a balanced mode.
3. The method of claim 2, wherein determining the target mode currently corresponding to the cache device according to the target data comprises:
if the read-write data volume of the read-write data in the target data is smaller than a read-write threshold, determining that the target mode is the idle mode; otherwise, comparing the device utilization rate of the first device with a utilization rate threshold;
and if the device utilization rate of the first device is greater than the utilization rate threshold and the read load proportion in the read-write data is greater than a first preset threshold, determining that the target mode is the read mode.
4. The method of claim 3, wherein determining the target mode currently corresponding to the cache device according to the target data comprises:
if the device utilization rate of the first device is greater than the utilization rate threshold, the write load proportion in the read-write data is greater than a second preset threshold, and the dirty data proportion is greater than a third preset threshold, determining that the target mode is the write mode.
5. The method of claim 3, wherein determining the target mode currently corresponding to the cache device according to the target data comprises:
if the device utilization rate of the first device is greater than the utilization rate threshold, the write load proportion and the read load proportion in the read-write data are the same, and the dirty data proportion is greater than a fourth preset threshold, determining that the target mode is the balanced mode.
6. The method of claim 2, wherein the different types of storage buckets include a first storage bucket, a second storage bucket, and a third storage bucket, the first storage bucket being located in the first device, the second storage bucket being located in the second device, the third storage bucket being located in both the first device and the second device, and the types of the storage buckets supporting dynamic adjustment.
7. The method of claim 6, wherein performing data processing based on the target pattern comprises:
when the target mode is the idle mode, writing data to be written into a first storage bucket in the first device, and reading data from the first device and the second device.
8. The method of claim 6, wherein performing data processing based on the target pattern comprises:
when the target mode is the read mode, setting the proportion of the third storage buckets to be greater than a first proportion threshold;
if available first storage buckets exist in the first device, writing data to be written into the available first storage buckets; otherwise, writing the data to be written after reclaiming third storage buckets occupied in the first device.
9. The method of claim 6, wherein performing data processing based on the target pattern comprises:
when the target mode is the write mode, setting the proportion of the first storage buckets to be greater than a second proportion threshold;
and if the proportion of the first storage buckets is greater than a third proportion threshold, writing the data to be written after the data in the first storage buckets is written back to the second device, wherein the third proportion threshold is greater than the second proportion threshold.
10. The method of claim 6, wherein performing data processing based on the target pattern comprises:
when the target mode is the balanced mode, setting the number of the first storage buckets in the first device to be greater than the number of the third storage buckets.
11. The method of claim 1, wherein the cache device records the locations of the storage buckets in the first device and the second device using a B+ tree index, and wherein the structure of the records in the B+ tree nodes is ckey.
12. A data processing apparatus of a cache device, comprising:
a data acquisition module, configured to acquire target data, wherein the target data comprises read-write data, a device utilization rate and a dirty data proportion;
a mode determination module, configured to determine, according to the target data, a target mode currently corresponding to the cache device;
a data processing module, configured to perform data processing based on the target mode;
wherein the cache device comprises a first device based on a solid state disk and a second device based on a mechanical hard disk, the first device and the second device comprise storage buckets of different types, the target mode comprises a plurality of modes, and the distribution proportions of the different types of storage buckets differ among the modes.
13. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the data processing method of a cache device according to any one of claims 1-11.
14. A computer-readable storage medium, wherein the storage medium stores a computer program for executing the data processing method of a cache device according to any one of claims 1-11.
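
The mode-selection logic recited in claims 2-5 can be read as a small decision procedure over the acquired target data. The following Python sketch is only an illustration of one possible reading of those claims: the names (TargetData, choose_mode), the field layout, and every concrete threshold value are assumptions introduced here for clarity and do not appear in the specification.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    IDLE = auto()
    READ = auto()
    WRITE = auto()
    BALANCED = auto()


@dataclass
class TargetData:
    rw_volume: int          # read-write data volume observed in the sampling window
    read_ratio: float       # read load proportion within the read-write data
    write_ratio: float      # write load proportion within the read-write data
    ssd_utilization: float  # device utilization rate of the first (SSD-based) device
    dirty_ratio: float      # dirty data proportion held in the cache


# All threshold values below are illustrative assumptions, not taken from the patent.
RW_THRESHOLD = 1_000            # "read-write threshold"
UTIL_THRESHOLD = 0.8            # "utilization rate threshold"
READ_LOAD_THRESHOLD = 0.7       # "first preset threshold"
WRITE_LOAD_THRESHOLD = 0.7      # "second preset threshold"
DIRTY_WRITE_THRESHOLD = 0.5     # "third preset threshold"
DIRTY_BALANCED_THRESHOLD = 0.5  # "fourth preset threshold"


def choose_mode(t: TargetData) -> Mode:
    """Map the acquired target data to a target mode, following claims 3-5."""
    # Claim 3: very little traffic -> idle mode.
    if t.rw_volume < RW_THRESHOLD:
        return Mode.IDLE
    # The remaining branches apply only when the SSD device is heavily utilized.
    if t.ssd_utilization > UTIL_THRESHOLD:
        # Claim 3: read-dominated load -> read mode.
        if t.read_ratio > READ_LOAD_THRESHOLD:
            return Mode.READ
        # Claim 4: write-dominated load with many dirty blocks -> write mode.
        if t.write_ratio > WRITE_LOAD_THRESHOLD and t.dirty_ratio > DIRTY_WRITE_THRESHOLD:
            return Mode.WRITE
        # Claim 5: evenly mixed load with many dirty blocks -> balanced mode.
        if t.write_ratio == t.read_ratio and t.dirty_ratio > DIRTY_BALANCED_THRESHOLD:
            return Mode.BALANCED
    # The claims do not name a default; falling back to the balanced mode is an assumption.
    return Mode.BALANCED
```

Under these assumed thresholds, for example, choose_mode(TargetData(rw_volume=5000, read_ratio=0.8, write_ratio=0.2, ssd_utilization=0.9, dirty_ratio=0.3)) returns Mode.READ.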
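Claims 7-10 then vary how the SSD-side capacity is split between first storage buckets (SSD-only) and third storage buckets (spanning both devices) depending on the target mode. The sketch below, which reuses the Mode enum from the previous sketch, expresses one assumed split per mode; the target_allocation helper and the concrete percentages are illustrative and are not taken from the claims.

```python
FIRST = "first"  # storage buckets resident only on the first (SSD) device
THIRD = "third"  # storage buckets spanning the first and second devices


def target_allocation(mode: Mode, ssd_buckets: int) -> dict[str, int]:
    """Return an assumed SSD-side split between first and third storage buckets."""
    if mode is Mode.READ:
        # Claim 8: push the third-bucket proportion above a proportion threshold.
        third = int(ssd_buckets * 0.6)
    elif mode is Mode.WRITE:
        # Claim 9: push the first-bucket proportion above a proportion threshold.
        third = int(ssd_buckets * 0.2)
    elif mode is Mode.BALANCED:
        # Claim 10: keep more first buckets than third buckets.
        third = ssd_buckets // 3
    else:
        # Claim 7 (idle mode): writes land in first buckets; this split itself is assumed.
        third = ssd_buckets // 2
    return {FIRST: ssd_buckets - third, THIRD: third}
```

Claim 9 additionally triggers a write-back of first-bucket data to the second device once the first-bucket proportion crosses a higher threshold; that flushing step is omitted from the sketch for brevity.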
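Claim 11 locates storage buckets through a B+ tree index whose records are keyed by a cache key ("ckey"); the claim text does not spell out the rest of the record. The sketch below stands in for that index with a sorted list maintained by Python's bisect module, and the BucketLocation fields (device, bucket_id, offset) are hypothetical placeholders for whatever the real record carries.

```python
import bisect
from dataclasses import dataclass
from typing import Optional


@dataclass
class BucketLocation:
    # Hypothetical record fields; claim 11 only names the key ("ckey").
    device: str      # "first" (SSD-based) or "second" (HDD-based)
    bucket_id: int
    offset: int


class BucketIndex:
    """Ordered ckey -> location map standing in for the B+ tree index of claim 11."""

    def __init__(self) -> None:
        self._keys = []        # sorted list of ckeys
        self._locations = []   # locations, kept parallel to self._keys

    def insert(self, ckey: int, loc: BucketLocation) -> None:
        i = bisect.bisect_left(self._keys, ckey)
        if i < len(self._keys) and self._keys[i] == ckey:
            self._locations[i] = loc  # key already present: update the location in place
        else:
            self._keys.insert(i, ckey)
            self._locations.insert(i, loc)

    def lookup(self, ckey: int) -> Optional[BucketLocation]:
        i = bisect.bisect_left(self._keys, ckey)
        if i < len(self._keys) and self._keys[i] == ckey:
            return self._locations[i]
        return None
```

A real implementation would use an actual B+ tree so that node-sized groups of records map cleanly onto cache blocks; the sorted-list stand-in only mirrors the ordered-lookup behaviour.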
CN202110642174.8A 2021-06-09 2021-06-09 Data processing method, device, equipment and medium of cache equipment Active CN113377291B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110642174.8A CN113377291B (en) 2021-06-09 2021-06-09 Data processing method, device, equipment and medium of cache equipment

Publications (2)

Publication Number Publication Date
CN113377291A CN113377291A (en) 2021-09-10
CN113377291B true CN113377291B (en) 2023-07-04

Family

ID=77573182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110642174.8A Active CN113377291B (en) 2021-06-09 2021-06-09 Data processing method, device, equipment and medium of cache equipment

Country Status (1)

Country Link
CN (1) CN113377291B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113535096B (en) * 2021-09-16 2022-01-11 深圳创新科技术有限公司 Virtual NVMe solid-state drive storage construction method and device
CN113805812B (en) * 2021-09-22 2024-03-05 深圳宏芯宇电子股份有限公司 Cache management method, device, equipment and storage medium
CN114356213B (en) * 2021-11-29 2023-07-21 重庆邮电大学 Parallel space management method for NVM wear balance under NUMA architecture
CN115826882B (en) * 2023-02-15 2023-05-30 苏州浪潮智能科技有限公司 Storage method, device, equipment and storage medium
CN116450054B (en) * 2023-06-16 2023-09-26 成都泛联智存科技有限公司 IO request processing method, device, host and computer readable storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103902474A (en) * 2014-04-11 2014-07-02 华中科技大学 Mixed storage system and method for supporting solid-state disk cache dynamic distribution
CN107463424A (en) * 2016-06-02 2017-12-12 北京金山云网络技术有限公司 A kind of virtual machine migration method and device
CN106502592A (en) * 2016-10-26 2017-03-15 郑州云海信息技术有限公司 Solid state hard disc caching block recovery method and system
CN107015763A (en) * 2017-03-03 2017-08-04 北京中存超为科技有限公司 Mix SSD management methods and device in storage system
CN110502188A (en) * 2019-08-01 2019-11-26 苏州浪潮智能科技有限公司 A kind of date storage method and device based on data base read-write performance
CN111124304A (en) * 2019-12-19 2020-05-08 北京浪潮数据技术有限公司 Data migration method and device, electronic equipment and storage medium
CN111209253A (en) * 2019-12-30 2020-05-29 河南创新科信息技术有限公司 Distributed storage equipment performance improving method and device and distributed storage equipment
CN112130769A (en) * 2020-09-18 2020-12-25 苏州浪潮智能科技有限公司 Mechanical hard disk data processing method, device, equipment and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Survey of hybrid storage systems based on magnetic disks and solid-state drives; Chen Zhen; Liu Wenjie; Zhang Xiao; Bu Hailong; Journal of Computer Applications (Issue 05); full text *

Also Published As

Publication number Publication date
CN113377291A (en) 2021-09-10

Similar Documents

Publication Publication Date Title
CN113377291B (en) Data processing method, device, equipment and medium of cache equipment
KR101357397B1 (en) Method for tracking memory usages of a data processing system
US8521986B2 (en) Allocating storage memory based on future file size or use estimates
US9081702B2 (en) Working set swapping using a sequentially ordered swap file
US7653799B2 (en) Method and apparatus for managing memory for dynamic promotion of virtual memory page sizes
US9058212B2 (en) Combining memory pages having identical content
US10261918B2 (en) Process running method and apparatus
CN112632069B (en) Hash table data storage management method, device, medium and electronic equipment
CN110347338B (en) Hybrid memory data exchange processing method, system and readable storage medium
CN112115067A (en) Flash memory physical resource set management device and method and computer readable storage medium
CN109582649A (en) A kind of metadata storing method, device, equipment and readable storage medium storing program for executing
CN112214162A (en) Storage device and control method
US20230100110A1 (en) Computing resource management method, electronic equipment and program product
CN115543222A (en) Storage optimization method, system, equipment and readable storage medium
CN112860381B (en) Virtual machine memory capacity expansion method and system based on Shenwei processor
CN108563507A (en) A kind of EMS memory management process, device, equipment and readable storage medium storing program for executing
US9857864B1 (en) Systems and methods for reducing power consumption in a memory architecture
KR102456017B1 (en) Apparatus and method for file sharing between applications
JP4792065B2 (en) Data storage method
CN116820861B (en) Method and device for testing enterprise-level solid state disk garbage collection mechanism
US11829341B2 (en) Space-efficient persistent hash table data structure
US20210263648A1 (en) Method for managing performance of logical disk and storage array
US11409665B1 (en) Partial logical-to-physical (L2P) address translation table for multiple namespaces
US20230418643A1 (en) Improved memory management for busy virtual machine guests
CN116719609A (en) Performance optimization method of JavaScript engine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant