CN117707989A - Method and device for creating read cache layer and storage medium - Google Patents

Method and device for creating read cache layer and storage medium

Info

Publication number
CN117707989A
CN117707989A (application CN202311705410.1A)
Authority
CN
China
Prior art keywords
zram
virtual device
target
server
creating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311705410.1A
Other languages
Chinese (zh)
Inventor
王丽红 (Wang Lihong)
过晓春 (Guo Xiaochun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China United Network Communications Group Co Ltd
Unicom Digital Technology Co Ltd
Unicom Cloud Data Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Unicom Digital Technology Co Ltd
Unicom Cloud Data Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd, Unicom Digital Technology Co Ltd, Unicom Cloud Data Co Ltd filed Critical China United Network Communications Group Co Ltd
Priority to CN202311705410.1A
Publication of CN117707989A
Legal status: Pending

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application provides a method, a device and a storage medium for creating a read cache layer, relates to the field of computer technologies, and can solve the current problem of excessive power consumption of HDD storage disks. The method includes the following steps: acquiring the idle rate of the memory space of each server in a cloud pool, where the operating system of each server is capable of creating memory-optimization (ZRAM) virtual devices, and the cloud pool includes a plurality of servers and conventional hard disk drive (HDD) storage disks; for a target server whose idle rate is greater than or equal to a preset threshold, creating at least one ZRAM virtual device in a target area of the memory space of the target server; and setting the at least one ZRAM virtual device as a read cache layer of the HDD storage disk. The method and the device can effectively reduce the power consumption of HDD storage disks.

Description

Method and device for creating read cache layer and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and apparatus for creating a read cache layer, and a storage medium.
Background
With the continuous maturation of solid state drive (SSD) software and hardware technologies, SSD disk capacity keeps increasing and the advantages of SSDs become increasingly evident. However, for historical reasons, a large number of conventional hard disk drives (HDDs) remain in the cloud pools of cloud vendors' production environments.
An HDD storage disk typically reads stored data by swinging its head actuator arm. As capacity increases, the arm must sweep a wider range during random reads, so the HDD storage disk suffers from excessive power consumption.
Disclosure of Invention
The method, the device and the storage medium for creating a read cache layer provided in the present application solve the current problem of excessive power consumption of HDD storage disks: a read cache layer is created for the HDD storage disk, reducing large-scale random read access to it and thereby effectively reducing its power consumption.
In order to achieve the above purpose, the present application adopts the following technical scheme:
In a first aspect, the present application provides a method for creating a read cache layer, where the method includes: acquiring the idle rate of the memory space of each server in a cloud pool, where the operating system of each server is capable of creating memory-optimization (ZRAM) virtual devices, and the cloud pool includes a plurality of servers and a conventional hard disk drive (HDD) storage hard disk; for a target server whose idle rate is greater than or equal to a preset threshold, creating at least one ZRAM virtual device in a target area of the memory space of the target server; and setting the at least one ZRAM virtual device as a read cache layer of the HDD storage hard disk.
With reference to the first aspect, in one possible implementation manner, the method further includes: dividing a target area of the memory space according to a preset memory size to obtain at least one sub-target area; for each sub-target area, one ZRAM virtual device is created in each sub-target area.
With reference to the first aspect, in one possible implementation manner, the method further includes: adding at least one ZRAM virtual device into the HDD storage hard disk; creating a ZRAM storage memory based on the at least one ZRAM virtual device; and setting the ZRAM storage memory as a read cache layer of the HDD storage hard disk.
With reference to the first aspect, in one possible implementation manner, the method further includes: based on a preset period, monitoring the idle rate of the memory space of the target server; and if the idle rate of the memory space of the target server is smaller than a preset threshold value after the ZRAM virtual device is created, deleting the ZRAM virtual device corresponding to the target area of the target server.
With reference to the first aspect, in one possible implementation manner, the method further includes: adding the target server to a target list, where servers in the target list await reclamation of the target area; and sequentially removing the at least one ZRAM virtual device of each target server in the target list from the HDD storage hard disk.
In a second aspect, the present application provides a device for creating a read cache layer, where the device includes a communication unit and a processing unit. The communication unit is configured to obtain the idle rate of the memory space of each server in the cloud pool, where the operating system of each server is capable of creating memory-optimization (ZRAM) virtual devices, and the cloud pool includes a plurality of servers and a conventional hard disk drive (HDD) storage hard disk. The processing unit is configured to create, for a target server whose idle rate is greater than or equal to a preset threshold, at least one ZRAM virtual device in a target area of the memory space of the target server; the processing unit is further configured to set the at least one ZRAM virtual device as a read cache layer of the HDD storage hard disk.
With reference to the second aspect, in one possible implementation manner, the processing unit is specifically configured to: dividing a target area of the memory space according to a preset memory size to obtain at least one sub-target area; for each sub-target area, one ZRAM virtual device is created in each sub-target area.
With reference to the second aspect, in one possible implementation manner, the processing unit is specifically configured to: adding at least one ZRAM virtual device into the HDD storage hard disk; creating a ZRAM storage memory based on the at least one ZRAM virtual device; and setting the ZRAM storage memory as a read cache layer of the HDD storage hard disk.
With reference to the second aspect, in a possible implementation manner, the processing unit is further configured to: based on a preset period, monitoring the idle rate of the memory space of the target server; and if the idle rate of the memory space of the target server is smaller than a preset threshold value after the ZRAM virtual device is created, deleting the ZRAM virtual device corresponding to the target area of the target server.
With reference to the second aspect, in one possible implementation manner, the processing unit is specifically configured to: add the target server to a target list, where servers in the target list await reclamation of the target area; and sequentially remove the at least one ZRAM virtual device of each target server in the target list from the HDD storage hard disk.
In a third aspect, the present application provides a device for creating a read cache layer, where the device includes: a processor and a communication interface; the communication interface is coupled to a processor for running a computer program or instructions to implement the method of creating a read cache layer as described in any one of the possible implementations of the first aspect and the first aspect.
In a fourth aspect, the present application provides a computer readable storage medium having instructions stored therein which, when run on a terminal, cause the terminal to perform a method of creating a read cache layer as described in any one of the possible implementations of the first aspect and the first aspect.
In a fifth aspect, the present application provides a computer program product comprising instructions which, when run on a read cache layer creation means, cause the read cache layer creation means to perform the read cache layer creation method as described in any one of the possible implementations of the first aspect and the first aspect.
In a sixth aspect, the present application provides a chip comprising a processor and a communication interface, the communication interface and the processor being coupled, the processor being for running a computer program or instructions to implement a method of creating a read cache layer as described in any one of the possible implementations of the first aspect and the first aspect.
In particular, the chip provided in the present application further includes a memory for storing a computer program or instructions.
It should be noted that the above-mentioned computer instructions may be stored in whole or in part on a computer-readable storage medium. The computer readable storage medium may be packaged together with the processor of the apparatus or may be packaged separately from the processor of the apparatus, which is not limited in this application.
In a seventh aspect, the present application provides a system for creating a read cache layer, including: a server and an HDD storage hard disk, wherein the server is configured to perform the method of creating a read cache layer as described in any one of the possible implementations of the first aspect and the first aspect.
For descriptions of the second aspect through the seventh aspect in the present application, reference may be made to the detailed description of the first aspect; also, the advantageous effects described in the second aspect to the seventh aspect may refer to the advantageous effect analysis of the first aspect, and are not described herein.
In this application, the name of the creation apparatus of the read cache layer does not limit the devices or functional modules themselves; in actual implementation, these devices or functional modules may appear under other names. As long as the function of each device or functional module is similar to that in the present application, it falls within the scope of the claims of the present application and their equivalents.
These and other aspects of the present application will be more readily apparent from the following description.
The scheme brings at least the following beneficial effects. Based on the above technical scheme, the method for creating a read cache layer first obtains the idle rate of the memory space of the servers in the cloud pool that are capable of creating memory-optimization (ZRAM) virtual devices. Then, at least one ZRAM virtual device is created in a target area of the memory space of a target server whose idle rate is greater than or equal to a preset threshold; such an idle rate indicates that a large part of the target server's memory is free, so the target server can provide a read cache for the HDD storage hard disk. Further, the at least one ZRAM virtual device is set as the read cache layer of the HDD storage hard disk. Compared with the current situation of excessive HDD power consumption, this technical scheme creates a read cache layer for the HDD storage disk and reduces large-scale random read access to it, thereby effectively reducing its power consumption.
Drawings
Fig. 1 is a schematic architecture diagram of a cloud pool according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a system for creating a read cache layer according to an embodiment of the present application;
fig. 3 is a schematic hardware structure of a device for creating a read cache layer according to an embodiment of the present application;
FIG. 4 is a flowchart of a method for creating a read cache layer according to an embodiment of the present application;
FIG. 5 is a flowchart of another method for creating a read cache layer according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a device for creating a read cache layer according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, or B alone.
The terms "first" and "second" and the like in the description and in the drawings are used for distinguishing between different objects or for distinguishing between different processes of the same object and not for describing a particular sequential order of objects.
Furthermore, references to the terms "comprising" and "having" and any variations thereof in the description of the present application are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the present application, unless otherwise indicated, the meaning of "a plurality" means two or more.
The following explains the terms related to the embodiments of the present application, so as to facilitate the understanding of the reader.
(1) Cloud pool (primary resource pools)
As shown in fig. 1, the cloud pool includes a core backbone switch, a cloud pool egress firewall and a plurality of hardware resources. The largest-scale and most central resource pools are mainly divided into two main types: computing-class resource pools and storage-class resource pools.
In particular, the computing-class resource pools generally include a computing resource pool, a bare metal resource pool, and a graphics processing unit (GPU) resource pool. The storage-class resource pools typically include a block storage pool, a file storage pool, and an object storage pool. In addition to these, the cloud pool also includes a secure resource pool and a cloud management pool.
The computing resource pool is composed of N computing servers and an access switch. The bare metal resource pool is composed of N bare metal servers and an access switch. The GPU resource pool is composed of N GPU servers and an access switch. Both the block storage pool and the file storage pool are composed of N storage servers and access switches. The object storage pool is composed of N storage servers and an access switch. The secure resource pool is composed of N security servers and an access switch. The cloud management pool is composed of N management servers and an access switch.
For a storage pool, taking the block storage pool as an example, the pool further includes several different types of storage servers, with several servers of each type:
1) Several NVMe SSD all-flash storage servers (providing ultra-fast cloud disks);
2) Several SATA SSD all-flash storage servers (providing all-flash cloud disks);
3) Several SSD+HDD hybrid-flash storage servers (providing efficiency-enhanced cloud disks);
4) Several HDD storage servers (providing efficient cloud disks).
The performance of the cloud disks provided by 1)-4) above decreases in sequence. The file storage pool and the object storage pool may also include the storage servers of 1)-4) above, providing file storage and object storage at the corresponding performance levels, and are not described in detail herein.
(2) Memory optimization (ZRAM) techniques
ZRAM is a memory-optimization technology of the Linux kernel. It supports defining a memory region, creating a ZRAM virtual block device in it, and compressing data automatically: when the write interface of the ZRAM virtual device is called, the data is automatically compressed before being written into memory; when the read interface is called, the data is automatically decompressed after being read out and the decompressed data is returned.
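The setup described above can be sketched with the standard Linux zram interface. This is an illustrative sketch only (it requires root and the zram kernel module, and the device index, compression algorithm and size are example choices, not values mandated by this application):

```shell
# Create one ZRAM virtual device and define the memory region it may use.
modprobe zram num_devices=1
echo lz4 > /sys/block/zram0/comp_algorithm   # pick a compression algorithm
echo 10G > /sys/block/zram0/disksize         # size of the backing memory region
# /dev/zram0 now behaves as a block device: writes are compressed into RAM
# transparently, and reads are decompressed before the data is returned.
```

Once created, the device can be formatted or used raw like any other block device.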
(3) Distributed storage cache tiering (Ceph cache tier)
In the same Ceph distributed storage cluster, high-speed disks can form one resource pool (e.g., ssd-pool) and low-speed disks another (e.g., hdd-pool); the ssd-pool can then be set as a cache tier of the hdd-pool to accelerate reads and writes of the hdd-pool. The cache may be configured as a read cache, a write cache, or a read-write cache, with various other cache policies such as a cache flush policy.
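The tiering relationship described above can be sketched with the Ceph CLI. The pool names are illustrative, and the commands assume an existing cluster in which both pools have already been created:

```shell
# Attach ssd-pool as a cache tier in front of hdd-pool (illustrative sketch).
ceph osd tier add hdd-pool ssd-pool            # link the two pools
ceph osd tier cache-mode ssd-pool readproxy    # e.g. readproxy / readonly / writeback
ceph osd tier set-overlay hdd-pool ssd-pool    # route client I/O via the cache tier
ceph osd pool set ssd-pool hit_set_type bloom  # track object hits for flush/evict policy
```

With the overlay set, clients address hdd-pool as before, and Ceph serves or promotes objects through the cache tier according to the configured mode.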
With the continuous maturation of solid state drive (SSD) software and hardware technologies, SSD disk capacity keeps increasing and the advantages of SSDs become increasingly evident. HDD storage disks are mechanical disks whose power consumption is significantly higher than that of SSD disks. However, for historical reasons, a large number of HDD storage disks remain in the cloud pools of cloud vendors' production environments.
The performance of the efficient cloud disk provided by the distributed storage system created based on HDD storage disks is relatively low. The efficient cloud disk is generally applied to scenes requiring large capacity and relatively low performance requirements. Currently, HDD storage disks can support a capacity of 20 terabytes (T). Most HDD storage disks have capacities between 10T and 20T.
For random writes, large-capacity HDD storage disks are typically configured with a media cache and a flash-based non-volatile cache (NVC). For small random writes, the data can first be buffered in the media cache or NVC and then flushed from the buffer to the disk medium.
Specifically, the data is generally sorted in the cache before being flushed to the disk medium, which reduces the seek time of the head and improves random write performance to a certain extent. Therefore, for a large-capacity HDD storage disk, the power consumption of random writes is stable across different capacity regions.
For random reads, the HDD storage disk has no such optimization mechanism, and the stored data is generally read directly from the HDD storage disk through the head swing arm. As capacity increases, the arm must sweep a wider range during random reads, so the HDD storage disk suffers from excessive power consumption. For example, random reads across the 0-5T interval of a 10T HDD storage disk consume much more power than random reads across the 0-1T interval.
In view of this, the method for creating a read cache layer provided in the present application first obtains the idle rate of the memory space of the servers in the cloud pool that are capable of creating memory-optimization (ZRAM) virtual devices. Then, at least one ZRAM virtual device is created in a target area of the memory space of a target server whose idle rate is greater than or equal to a preset threshold; such an idle rate indicates that a large part of the target server's memory is free, so the target server can provide a read cache for the HDD storage hard disk. Further, the at least one ZRAM virtual device is set as the read cache layer of the HDD storage hard disk. Compared with the current situation of excessive HDD power consumption, this technical scheme creates a read cache layer for the HDD storage disk and reduces large-scale random read access to it, thereby effectively reducing its power consumption.
The following describes embodiments of the present application in detail with reference to the drawings.
Fig. 2 is a schematic architecture diagram of a system for creating a read cache layer according to an embodiment of the present application. As shown in fig. 2, the system for creating a read cache layer includes: a first server 201, a second server cluster 202, and an HDD storage hard disk 203.
Wherein the first server 201 is connected to the second server cluster 202 via a communication link. The second server cluster 202 is connected to the HDD storage hard disk 203 via a communication link. The first server 201 is connected to the HDD storage hard disk 203 through a communication link. The communication link may be a wired communication link or a wireless communication link, which is not limited in this application.
It should be noted that the operating system (such as Linux kernel) of the servers in the second server cluster 202 has the capability of creating ZRAM virtual devices. The first server 201, the second server cluster 202, and the HDD storage hard disk 203 are resources in one cloud pool.
In a possible implementation manner, the first server 201 is configured to monitor a memory space of a server in the second server cluster 202, and set a memory that meets a preset condition as a read cache layer of the HDD storage hard disk 203, so as to reduce power consumption of the HDD storage hard disk 203 during random reading.
In one possible implementation, the second server cluster 202 includes a compute class server and a storage class server.
Specifically, the computing class server may be a server in a computing resource pool in a cloud pool. The storage class servers may be servers in a block storage pool, a file storage pool, and an object storage pool.
In a possible implementation manner, hot data is stored in the read cache layer of the HDD storage hard disk 203. When an instruction to read target data from the HDD storage hard disk 203 is received, if the target data is already stored in the read cache layer, the target data can be read directly from the read cache layer, avoiding reading the data from the HDD storage hard disk 203 through the head swing arm and thereby reducing the power consumption of the HDD storage hard disk 203 during random reads.
If the target data is not pre-stored in the read cache layer, the target data needs to be read from the HDD storage hard disk 203 through the head swing arm.
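The read path above amounts to a cache-first lookup that falls through to the HDD on a miss. A minimal sketch (the key names and data are illustrative, not part of the application):

```shell
# Serve a read from the ZRAM read cache when the target data is present there;
# otherwise fall back to a (simulated) HDD read via the head swing arm.
declare -A read_cache=( [blockA]="hot-data" )   # hot data held in the cache layer

read_target() {
  local key="$1"
  if [[ -n "${read_cache[$key]+x}" ]]; then
    echo "cache-hit:${read_cache[$key]}"        # no HDD head movement needed
  else
    echo "hdd-read:$key"                        # would go to the HDD storage disk
  fi
}

read_target blockA   # cache-hit:hot-data
read_target blockB   # hdd-read:blockB
```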
When implemented in hardware, the various modules in the first server 201 may be integrated on the hardware structure of the creation apparatus of the read cache layer shown in fig. 3. The basic hardware structure of this apparatus is introduced below with reference to fig. 3.
Fig. 3 is a schematic hardware structure of a device for creating a read cache layer according to an embodiment of the present application. As shown in fig. 3, the means for creating a read cache layer comprises at least one processor 301, a communication line 302, and at least one communication interface 304, and may further comprise a memory 303. The processor 301, the memory 303, and the communication interface 304 may be connected through a communication line 302.
Processor 301 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, such as one or more digital signal processors (DSP) or one or more field programmable gate arrays (FPGA).
Communication line 302 may include a path for communicating information between the above-described components.
The communication interface 304 is used to communicate with other devices or communication networks, and any transceiver-like device may be used, such as ethernet, radio access network (radio access network, RAN), wireless local area network (wireless local area networks, WLAN), etc.
The memory 303 may be, but is not limited to, a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store the desired program code in the form of instructions or data structures and that can be accessed by a computer.
In a possible design, the memory 303 may exist independent of the processor 301, i.e. the memory 303 may be a memory external to the processor 301, where the memory 303 may be connected to the processor 301 through a communication line 302, for storing execution instructions or application program codes, and the execution is controlled by the processor 301, to implement a method for creating a read cache layer provided in the embodiments described below. In yet another possible design, the memory 303 may be integrated with the processor 301, i.e., the memory 303 may be an internal memory of the processor 301, e.g., the memory 303 may be a cache, and may be used to temporarily store some data and instruction information, etc.
As one possible implementation, processor 301 may include one or more CPUs, such as CPU0 and CPU1 in fig. 3. As another possible implementation, the creating means of the read cache layer may include a plurality of processors, such as the processor 301 and the processor 307 in fig. 3. As yet another possible implementation, the means for creating a read cache layer may further comprise an output device 305 and an input device 306.
It should be noted that the embodiments of the present application may reference one another, for example for the same or similar steps; likewise, the method embodiments, system embodiments and device embodiments may reference one another, which is not limited herein.
Fig. 4 is a flowchart of a method for creating a read cache layer according to an embodiment of the present application, where the method may be applied to the apparatus for creating a read cache layer shown in fig. 3. As shown in fig. 4, the method includes the following S401 to S403.
S401, acquiring the idle rate of the memory space of each server in the cloud pool.
In one possible implementation, the operating system information of all servers in the cloud pool is obtained. Servers whose operating system does not support the ZRAM function are removed from consideration, and servers whose operating system supports the ZRAM function are added to a first list (e.g., an exclusive list). Then, the idle rate of the memory space of each server in the first list is traversed.
Specifically, for each server, the ratio of the size of the memory in the idle state in the server to the total space of the memory of the server is used as the idle rate of the memory space of the server.
In one example, assume the memory space of a server is 100G. If 30G of the memory space is occupied, 70G of the server's memory is idle, which means the idle rate of the memory space is 70%.
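The ratio in S401 can be sketched as a small helper; this is a sketch only, and the unit (gigabytes) is an assumption for the example:

```shell
# Idle rate = idle memory / total memory, expressed as an integer percentage.
idle_rate() {  # usage: idle_rate <idle_gb> <total_gb>
  awk -v idle="$1" -v total="$2" 'BEGIN { printf "%d", idle * 100 / total }'
}

idle_rate 70 100   # the example above: 100G total, 30G occupied -> 70
```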
S402, creating at least one ZRAM virtual device in a target area of a memory space of a target server aiming at the target server with the idle rate being larger than or equal to a preset threshold value.
The preset threshold may be, for example, 70%; it can be set according to the actual situation, which is not limited in this application.
When the idle rate of the memory space of the target server is greater than or equal to the preset threshold, a large part of the memory of the target server is idle, and the memory of the target server can provide a read cache for the HDD storage hard disk.
In one example, target servers in the first list whose idle rate is greater than or equal to the preset threshold are moved to a second list (e.g., shared_list), while servers in the first list whose idle rate is below the preset threshold remain in the first list. Each target server in the second list is then traversed to create at least one ZRAM virtual device in a target area of its memory space.
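The list bookkeeping in this step can be sketched as follows; the server names and idle rates are illustrative data, not values from the application:

```shell
# Move servers whose idle rate meets the preset threshold from the first list
# to the second (shared) list; the rest stay in the first list.
threshold=70
first_list="srvA:80 srvB:60 srvC:75"   # name:idle_rate pairs
shared_list=""
remaining=""
for entry in $first_list; do
  name="${entry%%:*}"; rate="${entry#*:}"
  if [ "$rate" -ge "$threshold" ]; then
    shared_list="$shared_list $name"   # eligible to host ZRAM virtual devices
  else
    remaining="$remaining $entry"      # stays in the first list
  fi
done
echo "shared:$(echo $shared_list)"     # shared:srvA srvC
echo "first:$(echo $remaining)"        # first:srvB:60
```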
In one possible implementation, the process of creating at least one ZRAM virtual device may include the following steps 21-22.
And step 21, dividing the target area of the memory space according to the preset memory size to obtain at least one sub-target area.
The target area may be, for example, 30% of the memory space: if the memory space is 100G, the target area is 30G. The preset memory size may be 10G.
In one example, a target area with a memory size of 30G is divided according to a granularity of 10G, so as to obtain 3 sub-target areas.
Step 22, for each sub-target area, creating a ZRAM virtual device in each sub-target area.
In one example, one ZRAM virtual device is created in each of the 3 sub-target areas.
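Steps 21-22 can be sketched as a planning helper. The device naming is an assumption for illustration; actually creating the devices would go through the Linux zram sysfs interface and requires root:

```shell
# Divide a target area into fixed-size sub-areas and emit one planned ZRAM
# device per sub-area (a dry-run sketch; real creation needs root and sysfs).
plan_zram_devices() {  # usage: plan_zram_devices <target_gb> <granularity_gb>
  local count=$(( $1 / $2 ))
  local i
  for i in $(seq 0 $(( count - 1 ))); do
    echo "zram$i:${2}G"
  done
}

plan_zram_devices 30 10   # zram0:10G, zram1:10G, zram2:10G (one per line)
```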
S403, setting at least one ZRAM virtual device as a read cache layer of the HDD storage hard disk.
In a possible implementation, the process of setting the read cache layer may include the following steps 31-33.
Step 31, adding the at least one ZRAM virtual device to the HDD storage hard disk.
In one example, a ZRAM virtual device is added to the HDD storage hard disk as an object storage device (OSD), so that the HDD storage hard disk obtains the right to use the ZRAM virtual device.
Step 32, creating a ZRAM storage memory based on the at least one ZRAM virtual device.
In one example, the plurality of ZRAM virtual devices added to the HDD storage hard disk are used as the ZRAM storage memory (zram_pool).
Optionally, the failure (disaster-tolerance) domain of the zram_pool is at the OSD level.
Step 33, setting the ZRAM storage memory as a read cache layer of the HDD storage hard disk.
In one example, the zram_pool storage pool is set as a read cache layer (cache read-only) of the HDD storage hard disk.
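The description (OSDs, storage pools, a read-only cache layer) matches the cache-tiering model of Ceph-like distributed storage. Assuming such a system, attaching a cache pool to a base pool in read-only mode follows roughly the command sequence below; the pool names are illustrative, and the exact flags may vary by version, so the sketch only builds the strings:

```python
def cache_tier_commands(cache_pool="zram_pool", base_pool="hdd_pool"):
    """Build an illustrative Ceph-style command sequence that sets a
    cache pool as a read-only cache tier of a base pool."""
    return [
        f"ceph osd tier add {base_pool} {cache_pool}",
        f"ceph osd tier cache-mode {cache_pool} readonly",
    ]

for cmd in cache_tier_commands():
    print(cmd)
```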
It should be noted that the zram_pool is a general-purpose storage pool: it can not only accelerate the HDD storage hard disk as a read cache layer, but also independently provide block, file, and object storage services. Therefore, other services in the cloud pool that have cache requirements can apply to the zram_pool for cache space. However, before using the zram_pool, such services need to obtain the memory-space composition of the zram_pool in advance, and apply for use only when the zram_pool meets their service scenario. The zram_pool thus provides a global and flexible option for the memory-level cache requirements of all services in the cloud pool.
Based on the above technical solution, the method for creating a read cache layer first obtains the idle rate of the memory space of each server in the cloud pool that is capable of creating a memory-optimized ZRAM virtual device. Then, for a target server whose idle rate is greater than or equal to a preset threshold, which indicates that a large portion of its memory is idle and that it can provide a read cache for the HDD storage hard disk, at least one ZRAM virtual device is created in a target area of the target server's memory space. Further, the at least one ZRAM virtual device is set as the read cache layer of the HDD storage hard disk. Compared with the existing problem of excessive HDD power consumption, this technical solution creates a read cache layer for the HDD storage disk and reduces large-scale random read access to it, thereby effectively reducing the power consumption of the HDD storage disk.
As a possible embodiment of the present application, with reference to fig. 4 and as shown in fig. 5, in the method for creating a read cache layer, the process by which the creation apparatus of the read cache layer deletes a created ZRAM virtual device may be implemented by the following S501-S502.
S501, based on a preset period, monitoring the idle rate of the memory space of the target server.
The preset period may be 10 minutes, and may be set according to actual requirements, which is not limited in this application.
In a possible implementation, the target servers whose idle rate is greater than or equal to the preset threshold are placed in the second list, and the memory idle rate of each server in the second list is monitored according to the preset period.
It can be understood that after the ZRAM virtual device is created, as the target server's own application processes occupy memory, the memory idle rate of the target server gradually decreases. Conversely, as the target server's own application processes end, the memory of the target server is released, that is, the memory idle rate of the target server increases.
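The periodic monitoring can be sketched as follows. For testability, the sketch works on a sequence of idle-rate samples (one per preset period) rather than an actual timer loop; the function name and sample representation are choices of this sketch:

```python
def first_reclaim_period(samples, threshold=0.70):
    """Given idle-rate samples taken once per preset period, return the
    index of the first period at which the idle rate falls below the
    threshold and reclamation should be triggered, or None."""
    for i, rate in enumerate(samples):
        if rate < threshold:
            return i
    return None

# The idle rate decreases as the server's own processes occupy memory:
print(first_reclaim_period([0.72, 0.71, 0.65]))  # 2
```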
S502, if the idle rate of the memory space of the target server is less than the preset threshold after the ZRAM virtual device is created, deleting the ZRAM virtual device corresponding to the target area of the target server.
It should be noted that if the idle rate of the memory space of the target server is less than the preset threshold after the ZRAM virtual device is created, the memory space of the target server is no longer sufficient for its own needs, and the shared target area needs to be reclaimed, so as to keep the server's own application processes running and strengthen its reliability.
In a possible implementation manner, the process of deleting the ZRAM virtual device corresponding to the target area of the target server may be implemented by the following steps 41 to 42.
Step 41, moving the target server into a target list.
The servers in the target list wait for their target areas to be reclaimed.
In one example, if the idle rate of the memory space of the target server falls below 70% after the ZRAM virtual device is created, the target server is moved from the second list to the target list (e.g., shrinking_list), where its created ZRAM virtual devices wait to be deleted and its target area waits to be reclaimed.
Step 42, removing the at least one ZRAM virtual device of each target server in the target list from the HDD storage hard disk in sequence.
In one example, for each target server in the target list, the ZRAM virtual devices corresponding to that target server are traversed and kicked out of the cluster in turn.
Specifically, after one ZRAM virtual device is kicked out, data rebuilding is triggered; after the rebuilding is completed, the second ZRAM virtual device is kicked out, and so on, until all ZRAM virtual devices shared by the target server have been kicked out and the target area is released.
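The kick-then-rebuild loop above can be sketched as follows. The `kick` and `wait_for_rebuild` callbacks stand in for the cluster operations, which are not specified in the original; they are abstract placeholders of this sketch:

```python
def evict_zram_devices(devices, kick, wait_for_rebuild):
    """Kick ZRAM devices out of the cluster one at a time, waiting
    for data rebuilding to complete between removals."""
    released = []
    for dev in devices:
        kick(dev)            # remove this device from the cluster
        wait_for_rebuild()   # fast, since ZRAM is a memory medium
        released.append(dev)
    return released

order = evict_zram_devices(["zram0", "zram1", "zram2"],
                           kick=lambda d: None,
                           wait_for_rebuild=lambda: None)
print(order)  # ['zram0', 'zram1', 'zram2']
```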
Further, the released target server is placed back in the first list, and the above steps S401-S403 are repeated, so that the target server can again provide a read cache layer for the HDD storage hard disk.
It should be noted that since all ZRAM devices are memory media, the rebuilding and removal are very fast.
Based on the above technical solution, the idle rate of the memory space of the target server is monitored according to a preset period. If the idle rate of the memory space of the target server is less than the preset threshold after the ZRAM virtual device is created, the ZRAM virtual device corresponding to the target area of the target server is deleted. This technical solution reclaims the target area shared by the target server when its memory space becomes insufficient, thereby keeping the target server's own application processes running and strengthening the reliability of the target server.
The embodiments of the present application may divide the creation apparatus of the read cache layer into functional modules or functional units according to the above method examples. For example, each functional module or functional unit may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in hardware, or in a software functional module or functional unit. The division of modules or units in the embodiments of the present application is merely a logical function division, and other division manners may be used in practice.
As shown in fig. 6, a schematic structural diagram of a device 60 for creating a read cache layer according to an embodiment of the present application is provided, where the device includes: communication unit 601 and processing unit 602.
The communication unit 601 is configured to obtain the idle rate of the memory space of each server in the cloud pool, where an operating system of the server has the capability of creating a memory-optimized ZRAM virtual device; the cloud pool includes a plurality of servers and a traditional hard disk HDD storage hard disk.
A processing unit 602, configured to create, for a target server having an idle rate greater than or equal to a preset threshold, at least one ZRAM virtual device in a target area of a memory space of the target server.
The processing unit 602 is further configured to set at least one ZRAM virtual device as a read cache layer of the HDD storage hard disk.
The processing unit 602 is specifically configured to: dividing a target area of the memory space according to a preset memory size to obtain at least one sub-target area; for each sub-target area, one ZRAM virtual device is created in each sub-target area.
The processing unit 602 is specifically configured to: adding at least one ZRAM virtual device into the HDD storage hard disk; creating a ZRAM storage memory based on the at least one ZRAM virtual device; and setting the ZRAM storage memory as a read cache layer of the HDD storage hard disk.
The processing unit 602 is further configured to: based on a preset period, monitoring the idle rate of the memory space of the target server; and if the idle rate of the memory space of the target server is smaller than a preset threshold value after the ZRAM virtual device is created, deleting the ZRAM virtual device corresponding to the target area of the target server.
The processing unit 602 is specifically configured to: hanging the target server into a target list; the server in the target list waits for the recovery of the target area; and sequentially eliminating at least one ZRAM virtual device of the target server in the target list from the HDD storage hard disk.
In a possible implementation manner, the creating device 60 for a read cache layer may further include a storage unit 603 (shown in a dashed box in fig. 6), where the storage unit 603 stores a program or an instruction, and when the processing unit 602 executes the program or the instruction, the creating device 60 for a read cache layer may perform the method for creating a read cache layer according to the foregoing method embodiment.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above. The specific working processes of the above-described systems, devices and units may refer to the corresponding processes in the foregoing method embodiments, which are not described herein.
Embodiments of the present application provide a computer program product comprising instructions that, when executed on a computer, cause the computer to perform the method of creating a read cache layer in the method embodiments described above.
The embodiment of the application also provides a computer readable storage medium, in which instructions are stored, when the instructions run on a computer, the computer is caused to execute the method for creating the read cache layer in the method flow shown in the method embodiment.
The computer readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (Random Access Memory, RAM), a read-only memory (Read-Only Memory, ROM), an erasable programmable read-only memory (Erasable Programmable Read Only Memory, EPROM), a register, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, any suitable combination of the foregoing, or any other form of computer readable storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application specific integrated circuit (Application Specific Integrated Circuit, ASIC). In the context of the present application, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Since the creating apparatus, the computer readable storage medium, and the computer program product of the read cache layer in the embodiments of the present application may be applied to the above-mentioned method, the technical effects that can be obtained by the creating apparatus, the computer readable storage medium, and the computer program product may also refer to the above-mentioned method embodiments, and the embodiments of the present application are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative; the division of the units is merely a logical function division, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A method for creating a read cache layer, the method comprising:
acquiring the idle rate of the memory space of each server in a cloud pool, wherein an operating system of the server has the capability of creating a memory-optimized ZRAM virtual device; the cloud pool comprises a plurality of servers and a traditional hard disk HDD storage hard disk;
creating, for a target server whose idle rate is greater than or equal to a preset threshold, at least one ZRAM virtual device in a target area of a memory space of the target server;
and setting the at least one ZRAM virtual device as a read cache layer of the HDD storage hard disk.
2. The method of claim 1, wherein creating at least one ZRAM virtual device in a target area of a memory space of the target server comprises:
dividing the target area of the memory space according to a preset memory size to obtain at least one sub-target area;
for each sub-target area, creating a ZRAM virtual device in said each sub-target area.
3. The method of claim 1, wherein said setting the at least one ZRAM virtual device as a read cache layer of the HDD storage hard disk comprises:
adding the at least one ZRAM virtual device to the HDD storage hard disk;
creating a ZRAM storage memory based on the at least one ZRAM virtual device;
and setting the ZRAM storage memory as a read cache layer of the HDD storage hard disk.
4. The method according to claim 1, wherein the method further comprises:
based on a preset period, monitoring the idle rate of the memory space of the target server;
and if the idle rate of the memory space of the target server is smaller than the preset threshold value after the ZRAM virtual device is created, deleting the ZRAM virtual device corresponding to the target area of the target server.
5. The method of claim 4, wherein deleting the ZRAM virtual device corresponding to the target area of the target server comprises:
hanging the target server into a target list; the server in the target list waits for recycling the target area;
and sequentially eliminating at least one ZRAM virtual device of the target server in the target list from the HDD storage hard disk.
6. A device for creating a read cache layer, which is characterized by comprising a communication unit and a processing unit;
the communication unit is configured to acquire the idle rate of the memory space of each server in the cloud pool, wherein an operating system of the server has the capability of creating a memory-optimized ZRAM virtual device; the cloud pool comprises a plurality of servers and a traditional hard disk HDD storage hard disk;
the processing unit is configured to create, for a target server whose idle rate is greater than or equal to a preset threshold, at least one ZRAM virtual device in a target area of a memory space of the target server;
the processing unit is further configured to set the at least one ZRAM virtual device as a read cache layer of the HDD storage hard disk.
7. The apparatus according to claim 6, wherein the processing unit is specifically configured to:
dividing the target area of the memory space according to a preset memory size to obtain at least one sub-target area;
for each sub-target area, creating a ZRAM virtual device in said each sub-target area.
8. The apparatus according to claim 6, wherein the processing unit is specifically configured to:
adding the at least one ZRAM virtual device to the HDD storage hard disk;
creating a ZRAM storage memory based on the at least one ZRAM virtual device;
and setting the ZRAM storage memory as a read cache layer of the HDD storage hard disk.
9. The apparatus of claim 6, wherein the processing unit is further configured to:
based on a preset period, monitoring the idle rate of the memory space of the target server;
and if the idle rate of the memory space of the target server is smaller than the preset threshold value after the ZRAM virtual device is created, deleting the ZRAM virtual device corresponding to the target area of the target server.
10. The apparatus according to claim 9, wherein the processing unit is specifically configured to:
hanging the target server into a target list; the server in the target list waits for recycling the target area;
and sequentially eliminating at least one ZRAM virtual device of the target server in the target list from the HDD storage hard disk.
11. A device for creating a read cache layer, comprising: a processor and a communication interface; the communication interface is coupled to the processor for running a computer program or instructions to implement the method of creating a read cache layer as claimed in any one of claims 1-5.
12. A computer readable storage medium having instructions stored therein, which when executed by a computer, perform the method of creating a read cache layer according to any one of claims 1-5.
CN202311705410.1A 2023-12-12 2023-12-12 Method and device for creating read cache layer and storage medium Pending CN117707989A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311705410.1A CN117707989A (en) 2023-12-12 2023-12-12 Method and device for creating read cache layer and storage medium


Publications (1)

Publication Number Publication Date
CN117707989A true CN117707989A (en) 2024-03-15

Family

ID=90149074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311705410.1A Pending CN117707989A (en) 2023-12-12 2023-12-12 Method and device for creating read cache layer and storage medium

Country Status (1)

Country Link
CN (1) CN117707989A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination