CN118069575A - Storage space management method and management equipment
- Publication number
- CN118069575A (application number CN202410048188.0A)
- Authority
- CN
- China
- Prior art keywords
- storage device
- storage
- cxl
- management
- computing
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/42—Bus transfer protocol, e.g. handshake; Synchronisation
- G06F13/4204—Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus
- G06F13/4221—Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being an input/output bus, e.g. ISA bus, EISA bus, PCI bus, SCSI bus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0625—Power saving in storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/0644—Management of space entities, e.g. partitions, extents, pools
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0647—Migration mechanisms
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
An embodiment of the application discloses a storage space management method and a management device. The method is applied to the management device: the management device first determines a first storage device and a second storage device in a computing system, migrates the data in the second storage device to the first storage device, and then controls the second storage device to switch from an active state to an idle state, so that more storage devices in the computing system can be in the idle state and the power consumption of the computing system is reduced. At the same time, the management device has fewer storage devices in the active state to manage, which saves storage management costs.
Description
Technical Field
The application relates to the technical field of computing devices, and in particular to a storage space management method and a management device.
Background
The compute express link (CXL) protocol is an open interconnect protocol that enables high-speed, efficient interconnection between a central processing unit (CPU) and a graphics processing unit (GPU), a field programmable gate array (FPGA), or another accelerator, thereby meeting the requirements of high-performance heterogeneous computing. The CXL protocol allows the CPU of a computing device to access the attached memory of a device through memory semantics, without occupying a memory slot of the computing device.
Since the CXL protocol was introduced, the industry has rushed to apply it to memory expansion scenarios, expanding the memory capacity of computing devices such as servers.
However, while the CXL protocol can provide a sufficiently large shared memory pool for a computing system, it also increases the power consumption and storage management costs of the computing system.
Disclosure of Invention
The embodiments of the application provide a storage space management method and a management device, which can reduce the power consumption and storage management costs of a computing system.
A first aspect of the embodiments of the application provides a storage space management method applied to a management device, wherein the management device is used for managing a CXL storage pool and computing devices, both the computing devices and the CXL storage pool comprise storage devices, and the storage devices comprise a first storage device and a second storage device; the method comprises the following steps:
determining the first storage device and the second storage device, wherein the target parameter of the first storage device is larger than the target parameter of the second storage device, the remaining capacity of the first storage device is larger than or equal to the used space capacity of the second storage device, and the target parameter comprises the utilization rate, the specification capacity, the ratio of the specification capacity to the operating power consumption, the reciprocal of the operating power consumption, or the priority; and migrating the data in the second storage device to the first storage device, and switching the second storage device from an active state to an idle state so as to reduce the power consumption of the second storage device.
The remaining capacity of the first storage device is not 0, and the utilization rate of the second storage device is not 0.
When there is one first storage device, the remaining capacity of the first storage device is the remaining capacity of that device; when there are a plurality of first storage devices, the remaining capacity of the first storage device is the sum of the remaining capacities of the plurality of first storage devices.
In the embodiments of the application, the data in the second storage device is migrated to the first storage device, and the second storage device is then controlled to switch from the active state to the idle state, so that more storage devices can be in the idle state and the power consumption of the computing system is reduced; meanwhile, the management device has fewer storage devices in the active state to manage, which saves storage management costs.
In the embodiments of the application, using the storage device with the smaller target parameter as the second storage device and the storage device with the larger target parameter as the first storage device yields further beneficial effects.
For example, when the target parameter is the utilization rate, the management device migrates the data of the second storage device with the lower utilization rate to the first storage device with the higher utilization rate; the migration amount is smaller, which saves computing resources.
For example, when the target parameter is the specification capacity and the specification capacities of the storage devices are not all the same, the data can be concentrated in fewer storage devices, thereby further saving storage management costs.
For example, when the target parameter is the ratio of the specification capacity to the operating power consumption, the data in the computing system is preferentially stored in the storage devices with the higher energy efficiency ratio, thereby reducing the power consumption of the computing system.
For example, when the target parameter is the reciprocal of the operating power consumption, the data in the computing system is preferentially concentrated in the storage devices with the lower power consumption, thereby reducing the power consumption of the computing system.
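The following Python fragment is a minimal, hypothetical sketch of how each candidate target parameter listed above could be evaluated for a storage device; the field names and the mode switch are illustrative assumptions, not part of the claimed method.

```python
# Hypothetical sketch: evaluating the target parameter of a storage device.
# Field names (spec_capacity, used, power, priority) are illustrative only.

def target_parameter(device: dict, mode: str) -> float:
    """Return the value of the chosen target parameter for one device."""
    if mode == "utilization":        # used space capacity / specification capacity
        return device["used"] / device["spec_capacity"]
    if mode == "spec_capacity":      # concentrate data on larger devices
        return device["spec_capacity"]
    if mode == "capacity_per_watt":  # energy efficiency ratio
        return device["spec_capacity"] / device["power"]
    if mode == "inverse_power":      # prefer low-power devices
        return 1.0 / device["power"]
    if mode == "priority":           # pre-specified devices rank first
        return 1.0 if device["priority"] else 0.0
    raise ValueError(f"unknown mode: {mode}")
```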
In one possible implementation, the determining the first storage device and the second storage device includes: determining the storage device with the smallest target parameter as the second storage device; and determining the first storage device from the storage devices other than the second storage device.
In the embodiments of the application, the second storage device is determined first, and the first storage device is then determined according to the used space capacity of the second storage device, so that the storage device with the smallest target parameter is preferentially controlled to enter the idle state, that is, switched from the active state to the idle state, which quickly reduces the power consumption of the computing system.
In one possible implementation, the determining the first storage device and the second storage device includes: sorting the storage devices in descending order of the target parameter; determining the first M storage devices in the sorting result as the first storage devices, where M is a positive integer greater than or equal to 1; and determining, from the storage devices other than the first storage devices, a storage device whose used space capacity is less than or equal to the sum of the remaining capacities of the M first storage devices as the second storage device.
In the embodiments of the application, by preferentially storing the data in the first M storage devices of the target parameter sorting result, the beneficial effect corresponding to the target parameter can be maximized.
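A minimal sketch of this sort-based selection, under the same illustrative assumptions as above (dict-based devices, a caller-supplied target_parameter function):

```python
# Hypothetical sketch: sort devices by target parameter in descending order,
# take the first M as first storage devices, then pick as the second storage
# device one whose used capacity fits in their combined remaining capacity.

def select_devices(devices, target_parameter, m=1):
    ranked = sorted(devices, key=target_parameter, reverse=True)
    first = ranked[:m]
    free = sum(d["spec_capacity"] - d["used"] for d in first)
    for d in ranked[m:]:
        if 0 < d["used"] <= free:    # the second device's utilization is not 0
            return first, d          # (first storage devices, second storage device)
    return first, None               # no fit: the caller may retry with m + 1
```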
In one possible implementation, the target parameter is the utilization rate, and the determining the first storage device and the second storage device includes: determining that a storage device whose utilization rate is less than or equal to a first threshold is the second storage device, and marking a storage device whose utilization rate is greater than the first threshold as an active storage device; and determining the first storage device from the active storage devices.
In the embodiments of the application, by setting the first threshold to determine the second storage device, the number of second storage devices and the corresponding data amount can be controlled, so the amount of migrated data can be controlled and the computing resources of the management device can be better scheduled.
In one possible implementation, the storage devices in the CXL storage pool have the same specification capacity, and the first storage device and the second storage device are both storage devices in the CXL storage pool; the determining the first storage device and the second storage device includes: obtaining the sum of the used space capacities of all the storage devices in the CXL storage pool; dividing the sum of the used space capacities by the specification capacity and rounding the result up to obtain a minimum number K; and determining K storage devices in the CXL storage pool as the first storage devices, and determining all the storage devices in the CXL storage pool other than the first storage devices as the second storage devices.
In the embodiments of the application, the minimum number K is calculated directly, and the data in the CXL storage pool is then concentrated in the K first storage devices, so that the first storage device and the second storage device do not need to be determined repeatedly in a loop; the preset termination condition can be reached after the first and second storage devices are determined once and the corresponding data migration is completed, which saves computing resources.
In one possible implementation, the determining that K of the storage devices in the CXL storage pool are the first storage devices includes: acquiring the utilization rate of each storage device in the CXL storage pool; and sorting the storage devices in descending order of utilization rate, and taking the first K storage devices in the sorting result as the first storage devices.
In the embodiments of the application, the first K storage devices in the utilization sorting result are selected as the first storage devices, and the storage devices ranked lower are selected as the second storage devices, so that the sum of the used space capacities of the second storage devices is minimized, less data needs to be migrated, and computing resources can be saved.
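A sketch of this consolidation plan for a pool of equal-capacity devices; the helper below is an illustrative assumption, not the claimed procedure itself:

```python
import math

# Hypothetical sketch: for a CXL storage pool whose devices share one
# specification capacity, K = ceil(total used capacity / specification
# capacity) is the minimum number of devices that can hold all the data.
# The K most-utilized devices stay active; the rest become migration sources.

def consolidation_plan(pool, spec_capacity):
    total_used = sum(d["used"] for d in pool)
    k = math.ceil(total_used / spec_capacity)       # minimum number K
    ranked = sorted(pool, key=lambda d: d["used"], reverse=True)
    first, second = ranked[:k], ranked[k:]          # keep K active, idle the rest
    return first, second
```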
In one possible implementation, before the determining the first storage device and the second storage device, or after the switching the second storage device from the active state to the idle state, the method further comprises: determining whether a trigger condition for data migration is satisfied, wherein the trigger condition comprises that the power consumption and storage management cost of the computing system have not reached the minimum, and/or that the average utilization rate of the storage devices in the active state in the computing system has not reached a third threshold, the computing system comprising the computing devices, the management device, and the CXL storage pool; and if so, triggering the step of determining the first storage device and the second storage device.
In the embodiments of the application, by setting the trigger condition, the management device can cyclically execute the steps of determining the first storage device and the second storage device and performing data migration until the trigger condition is no longer satisfied, that is, until the termination condition is met. In this way, the beneficial effect expected by the user can be achieved as far as possible.
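A minimal sketch of one such trigger check, assuming the third-threshold form of the condition (the threshold value and field names are illustrative):

```python
# Hypothetical sketch: keep migrating while the average utilization of the
# active storage devices is still below the third threshold, i.e. while data
# is still spread too thinly across the pool.

def average_active_utilization(devices):
    active = [d for d in devices if d["used"] > 0]
    if not active:
        return 1.0   # nothing active: treat as fully consolidated
    return sum(d["used"] / d["spec_capacity"] for d in active) / len(active)

def migration_triggered(devices, third_threshold=0.8):
    return average_active_utilization(devices) < third_threshold
```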
In one possible implementation, after the switching the second storage device from the active state to the idle state, the method further comprises: determining whether a trigger condition for data migration is satisfied, wherein the trigger condition comprises that the utilization rate of any active storage device is less than or equal to a second threshold; if so, determining that the active storage device whose utilization rate is less than or equal to the second threshold is the second storage device, and deleting the active storage device mark of that device; and triggering the step of determining the first storage device from the active storage devices.
In the embodiments of the application, the data in the second storage devices with lower utilization rates is migrated to the first storage devices with higher utilization rates in two passes, so that the management device is prevented from migrating a large amount of data in a short time while managing the storage space of the computing devices, which guarantees the performance of the management device.
In one possible implementation, the first storage device and the second storage device are both storage devices in the CXL storage pool.
In the embodiments of the application, the local memory of the computing device is kept out of data migration, which ensures the service operation of the computing device.
In one possible implementation, at least one of the first storage device and the second storage device is a storage device of the computing device.
In the embodiments of the application, the local memory of the computing device participates in data migration, which better meets the read-write latency requirements of latency-sensitive services.
In one possible implementation, after the migrating the data in the second storage device to the first storage device, the method further includes: receiving a storage space allocation request sent by the computing device; and in response to the storage space allocation request, allocating storage space of a storage device in the active state to the computing device.
In the embodiments of the application, the storage space allocated to the computing device comes from storage devices in the active state, so that storage devices in the idle state do not participate in memory allocation as far as possible and remain in the low-power idle state.
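A sketch of this allocation policy, under the same illustrative assumptions (best-fit among active devices; waking an idle device is left to the caller):

```python
# Hypothetical sketch: serve an allocation request only from devices that are
# already active, so that idle devices stay in the low-power idle state.

def allocate(devices, size):
    active = [d for d in devices if d["used"] > 0]
    # Best fit: prefer the active device with the least remaining capacity
    # that can still satisfy the request.
    for d in sorted(active, key=lambda d: d["spec_capacity"] - d["used"]):
        if d["spec_capacity"] - d["used"] >= size:
            d["used"] += size
            return d
    return None   # only now would an idle device need to be activated
```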
In one possible implementation, the priority includes pre-specified and non-pre-specified; the first storage device is a pre-specified storage device, and the second storage device is a non-pre-specified storage device other than the first storage device.
In the embodiments of the application, through the preset priority, the user can directly control which storage devices in the computing system are prioritized and store data in them centrally, thereby better meeting the user's requirements.
A second aspect of the embodiments of the application further provides a management device, comprising: a processor coupled to a memory, the memory storing at least one computer program instruction that is loaded and executed by the processor to cause the management device to implement the method of any one of the first aspects.
A third aspect of the embodiments of the application further provides a computing system, comprising: computing devices, a management device, and a CXL storage pool, wherein both the CXL storage pool and the computing devices comprise storage devices, the management device is connected to the computing devices and the CXL storage pool respectively, and the management device is used for allocating and managing storage space in the storage devices; the management device is configured to implement the method of any one of the first aspects.
It should be appreciated that for the beneficial effects of the above aspects, reference may be made to one another.
Drawings
Fig. 1a is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 1b is a schematic diagram of another application scenario provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a hardware architecture of a computing system according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for managing storage space according to the present embodiment;
FIG. 4 is a flowchart of another method for managing storage space according to the present embodiment;
FIG. 5 is a graph showing the memory device utilization before and after data migration according to the present embodiment;
FIG. 6 is a graph showing the memory device utilization before and after another data migration provided by the present embodiment;
FIG. 7 is a graph showing the memory device utilization before and after another data migration provided by the present embodiment;
fig. 8 is a schematic structural diagram of a management device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will now be described with reference to the accompanying drawings. It is evident that the described embodiments are only some, but not all, of the embodiments of the present application. As one of ordinary skill in the art will appreciate, with the development of technology and the emergence of new scenarios, the technical solutions provided by the embodiments of the present application are also applicable to similar technical problems.
The terms "first", "second", and the like in the description, the claims, and the above-described figures are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises", "comprising", and "having", and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Some term concepts related to the embodiments of the present application are explained below.
(1) Compute express link (CXL) protocol
The CXL protocol is an open industry standard implemented on the basis of the peripheral component interconnect express (PCIe) protocol, used for high-bandwidth, low-latency device interconnection. The CXL protocol may be used to connect device types such as central processing units (CPUs), accelerators, memory caches, and intelligent network cards. The CXL protocol can be used in scenarios such as artificial intelligence (AI) and high-performance computing.
(2) Fabric management (fabric management or fabric manager, FM)
FM is management software for managing and allocating storage space in a CXL storage pool so as to provide computing devices with the required storage space. The hardware entity responsible for running FM can take many forms. For example, FM may be run by a CXL switch chip (CXL switch), by a CXL memory expansion card, by a CXL controller, by a CPU, or by a baseboard management controller (BMC). The hardware device running FM is collectively referred to as the management device, which may include the management software and the hardware device running the management software. For example, the management device may be a CXL controller, a CXL switch chip, or another control chip, or may be a stand-alone server or other terminal device.
(3) Computing device
The computing device may be a server, or may be a tablet, a personal computer, a mobile phone, or another terminal device. The computing device may include components such as a CPU, a south bridge chip, memory, PCIe devices, and their peripheral circuitry. For example, a computing device may include one or more CPUs. The computing device may also include an accelerator, which may be, for example, a tensor processing unit (TPU), a graphics processing unit (GPU), a neural network processing unit (NPU), a data processing unit (DPU), or a smart network interface card (smart NIC), etc. The computing device may also be a virtual computing device. In this embodiment, the computing device supports the CXL protocol, through which it can communicate with the storage devices.
(4) Storage device
The storage device is used to provide storage space for the computing device to store data. The storage device may be a memory or a hard disk. The memory may be a random access memory (RAM) or a read-only memory (ROM). For example, the random access memory is a dynamic random access memory (DRAM) or a storage class memory (SCM). The memory may also include other random access memories, such as a static random access memory (SRAM). The read-only memory is, for example, a programmable read-only memory (PROM) or an erasable programmable read-only memory (EPROM).
By way of example, the storage device may be a memory bank, such as a dual in-line memory module (DIMM) memory bank. For example, a DIMM memory bank is a module that includes DRAM. Illustratively, the storage device is a memory expansion card, or a memory disk with a PCIe interface. Alternatively, the storage device is a hard disk, for example a solid state drive (SSD) or a hard disk drive (HDD). This embodiment is described taking a memory bank as the storage device by way of example.
The embodiments of the present application relate to application scenarios for a CXL storage pool, which may include storage space provided by a plurality of storage devices. To distinguish between different storage devices, the descriptions "first storage device", "second storage device", etc. are used.
(5) Remaining capacity
The remaining capacity may also be referred to as unallocated capacity, free capacity, or available capacity. The remaining capacity refers to the capacity of the storage space in the storage device that has not been allocated for use by a computing device, for example the capacity of the free storage space in the storage device. For example, if the storage device has a specification capacity of 100 GB and 20 GB of that space has been allocated to computing devices, the remaining capacity of the storage device is 80 GB.
(6) Specification capacity
The specification capacity may also be referred to as total capacity, nominal capacity, or rated capacity. The specification capacity is, for example, the theoretical maximum capacity set for the storage device by its manufacturer during production. For example, the specification capacity of a storage device is 32 GB, 64 GB, 128 GB, 256 GB, or the like.
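These capacity terms, together with the utilization rate used throughout (the ratio of used space capacity to specification capacity), can be summarized in a small sketch; the class and field names are illustrative assumptions:

```python
from dataclasses import dataclass

# Minimal sketch of the capacity terms defined above. For a device with a
# 100 GB specification capacity of which 20 GB is allocated, the remaining
# capacity is 80 GB and the utilization rate is 0.2.

@dataclass
class StorageDevice:
    spec_capacity_gb: float   # specification (total/nominal/rated) capacity
    used_gb: float            # space already allocated to computing devices

    @property
    def remaining_gb(self) -> float:
        return self.spec_capacity_gb - self.used_gb

    @property
    def utilization(self) -> float:
        return self.used_gb / self.spec_capacity_gb

d = StorageDevice(spec_capacity_gb=100, used_gb=20)
assert d.remaining_gb == 80 and d.utilization == 0.2
```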
For ease of understanding, the CXL storage pooling technique is first described below.
In the architecture of a conventional computing device, memory banks are directly connected to the computing device through a bus, and the computing device can only use the storage space provided by the memory banks directly connected to itself. When the CPU in a computing device is to be connected to more memory banks to expand memory capacity and bandwidth, it is limited by the number of CPU pins.
The problem that memory capacity and bandwidth are limited by CPU pins can be solved by establishing a storage pool system through the CXL protocol. For example, a large number of memory banks are grouped into a storage pool, and the storage pool is then connected to the CPUs of a plurality of computing devices using the CXL protocol, so that storage space in the storage pool can be allocated to the plurality of CPUs as needed.
Referring to fig. 1a, fig. 1a shows a scenario in which interconnection between multiple computing devices and a storage pool is implemented based on a CXL controller. Memory banks 1 to 6 form the storage pool; the second interfaces of the CXL controller are connected to memory banks 1 to 6 respectively, and the first interfaces of the CXL controller are connected to computing device 1 and computing device 2 respectively. FM management software can run on the CXL controller, and the CXL controller manages the allocation of the storage space of memory banks 1 to 6 based on the FM management software. The CXL controller may allocate the storage space of memory banks 1 to 6 to either computing device 1 or computing device 2 for use, so that computing device 1 and computing device 2 can share the storage space of the 6 memory banks. Because each computing device connected to the CXL controller can share the storage space of all the memory banks connected to the CXL controller, the problem that memory capacity and bandwidth are limited by the CPU pins of the computing device is solved to a certain extent.
Referring also to fig. 1b, fig. 1b illustrates a scenario in which multiple computing devices and a storage pool are interconnected based on a CXL switch. FM management software may likewise run on the CXL switch to manage and allocate storage space in the storage pool. Furthermore, a CXL switch can provide more interfaces than a CXL controller, so a CXL switch can be connected to more computing devices and storage pools, and more computing devices can share the storage space in the storage pools.
The storage pool implemented based on the CXL protocol (hereinafter referred to as the CXL storage pool) is described above as being made up of memory banks, but the CXL storage pool may also be made up of other types of storage devices. For example, the CXL storage pool may include hard disks such as SSDs or HDDs, and the CXL switch allocates storage space in an SSD for use by a computing device. Furthermore, the specification capacities of the storage devices in the CXL storage pool may be the same or different; likewise, the types of the storage devices in the CXL storage pool may be the same (for example, each storage device is a DIMM memory bank) or different (for example, part of the storage devices are of the DIMM memory bank type and another part are of the hard disk type).
Furthermore, the CXL controller or CXL switch may allocate storage space in the CXL storage pool to a CPU in a computing device, or to other hardware entities, such as an accelerator or a GPU.
Furthermore, the CXL controller or CXL switch running the FM management software may also incorporate the local memory of the connected computing devices into its management and allocation.
The embodiments of the present application can be applied to scenarios in which memory expansion of a computing system is implemented based on the CXL protocol, as shown in fig. 1a or fig. 1b, and are specifically used for managing the storage space of the computing system. The embodiments can be used to manage data migration within the storage space of the CXL storage pool, and also to manage data migration between the storage space of the CXL storage pool and the storage space of the local memory of a computing device.
Referring to fig. 2, fig. 2 is a schematic diagram of a hardware architecture of a computing system 10 according to an embodiment of the application. As shown in fig. 2, computing system 10 may include n computing devices, a management device, and a CXL storage pool including m storage devices, n being a positive integer greater than or equal to 1, m being a positive integer greater than or equal to 1.
Computing devices 1 to n may be used to process service data, and may store service data using the storage space provided by the CXL storage pool. For further explanation of the computing device, refer to (3) in the terminology section above.
The management device is the hardware entity that runs FM. The management device may be, for example, the CXL controller shown in fig. 1a or the CXL switch shown in fig. 1b; in the specific example of fig. 2, the management device is a computing device that runs FM, such as a server or a terminal device, other than computing devices 1 to n. The management device may also be a CXL memory expansion card.
Computing devices 1 to n and the management device each include a local memory, namely local memories 1 to n+1. The management device may be configured to manage the storage space of local memories 1 to n+1 and the storage space of storage devices 1 to m; to respond to storage space allocation requests from computing devices 1 to n by allocating storage space of the storage devices (including local memories 1 to n+1 and storage devices 1 to m) to the requesters among computing devices 1 to n; and to migrate the data in the storage space of a storage device to the corresponding storage device or local memory when the data migration condition is satisfied.
When the management device is a computing device, the computing devices 1 to n may be connected to the CXL storage pool through the management device, or may be connected to the management device and the CXL storage pool through buses, respectively, as shown in fig. 2.
The bus may be a peripheral component interconnect (PCI) bus, a PCIe bus, or an extended industry standard architecture (EISA) bus, among others. Buses may be divided into address buses, data buses, control buses, etc. For ease of illustration, only one line is shown in fig. 2, but this does not mean there is only one bus or one type of bus.
It is to be understood that the computing system 10 shown in fig. 2 is by way of example only and not limitation; computing system 10 may also be the system shown in fig. 1a or the system shown in fig. 1b.
In a system such as that shown in fig. 1a or fig. 1b, computing devices 1 to n are connected to the CXL storage pool through the management device (a CXL controller or CXL switch). In this case, the management device communicates with the n computing devices through a first interface, which may include, but is not limited to, a network interface card, a transceiver module, or the like. The first interface may be a single interface or a combination of multiple interfaces; the number of first interfaces is not limited in the embodiments of the present application.
The management device is also connected to storage devices 1 to m in the CXL storage pool through a second interface. In one possible implementation, the second interface is a PCIe interface, each of the m storage devices in the CXL storage pool includes a PCIe interface, and the PCIe interface of the management device is connected to the m storage devices in the CXL storage pool through a PCIe bus. The second interface may be a single interface or a combination of multiple interfaces; the number of second interfaces is not limited in this embodiment.
The CXL storage pool is a storage resource independent of computing devices 1 to n. The management device may be independent of the servers in which computing devices 1 to n are deployed and of the CXL storage pool; for example, the management device may be another computing device or a terminal device.
The management device may also be integrated with each storage device in the CXL storage pool in the same device. For example, the management device may be integrated with each storage device in the CXL storage pool in a storage server as a storage pool.
At present, the management scheme applied by the management device to the storage devices in computing system 10 is mainly data migration based on data access heat and service correlation: data that is frequently accessed by a computing device and strongly related to its services is migrated to a storage device closer to the computing device, while data with a relatively lower access frequency and weaker service correlation is migrated to a storage device further from the computing device, so as to improve the efficiency of data access by the computing device. However, the memory requirements of the computing system are dynamic, and this management scheme keeps more storage devices in the active state; since the management device has to manage the storage devices in the active state, the scheme may increase the memory power consumption and management cost of the computing system. For example, an 8 GB DDR3 1066 memory bank consumes 15 W in the active state and 4 W in the idle state; when the CXL storage pool includes dozens of such memory banks, this management scheme results in very high power consumption, which is not conducive to environmental protection and energy saving.
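As an illustrative calculation based on these figures (the pool size is an assumption): a CXL storage pool of 40 such memory banks consumes 40 × 15 W = 600 W when all banks are active, but only 10 × 15 W + 30 × 4 W = 270 W if the data is consolidated onto 10 banks and the remaining 30 are switched to the idle state, a reduction of more than half.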
On this basis, the embodiments of the present application provide a storage space management method, a management device, and a computing system, which can reduce the memory power consumption and management cost of a computing system.
Referring to fig. 3, fig. 3 is a flowchart of a storage space management method according to the present embodiment, as shown in fig. 3, the method includes steps 301 and 302.
The method of fig. 3 may be applied to the system architecture of fig. 1a. For example, the management device in the method shown in fig. 3 is the CXL controller in fig. 1a, the computing device is computing device 1 or computing device 2 in fig. 1a, and the CXL storage pool is memory banks 1 to 6 in fig. 1a.
The method of fig. 3 may also be applied to the system architecture of fig. 1 b. For example, the management device in the method shown in fig. 3 may be the CXL switch in fig. 1b, the computing device in the method shown in fig. 3 may be computing device 1 or computing device 2 in fig. 1b, and the CXL storage pool in the method shown in fig. 3 may be the CXL storage pool in fig. 1 b.
The method of fig. 3 may also be applied to the system architecture of fig. 2. For example, the management device in the method shown in fig. 3 is the management device in fig. 2, the computing device in the method shown in fig. 3 is any one of the n computing devices in fig. 2, and the CXL storage pool in the method shown in fig. 3 is the CXL storage pool in fig. 2.
Each computing device and/or the CXL storage pool in the computing system includes storage devices, and the storage devices include a first storage device and a second storage device. It will be appreciated that the storage device of a computing device is the local memory of that computing device in the system architecture shown in fig. 2.
Step 301, the management device determines a first storage device and a second storage device.
In the case where the trigger condition for data migration is satisfied, the management device may perform steps 301 and 302 to carry out data migration.
Specifically, the trigger may be external, such as an instruction sent by a service process or a hypervisor in a computing device, or the management device detecting that the power consumption of the computing system or the CXL storage pool is greater than a certain threshold; the trigger may also be internal, such as a migration period configured in the management device, with data migration performed once every migration period.
To perform data migration, the management device first needs to determine the source device and the destination device, and then migrate the data in the source device to the destination device. Correspondingly, in the embodiments of the application, the source device is the second storage device, and the destination device is the first storage device.
When more of the storage devices in the CXL storage pool are active, the power consumption of the computing system can be significant. Thus, in order to reduce the number of storage devices in the active state, the management device may store the data in the various storage devices of the computing system centrally, so that the utilization rate of more storage devices becomes 0, and then switch those storage devices with a utilization rate of 0 from the active state to the idle state.
The idle state and the active state differ in whether there is service data in the storage device: when service data exists in the storage device, the storage device is accessed by services at a certain frequency; when no service data exists in the storage device, its utilization rate is 0 and no service accesses it. It is understood that the power consumption of a storage device in the active state is greater than its power consumption in the idle state.
It will be appreciated that in some possible implementations, the utilization rate of a storage device without service data is not necessarily 0; some configuration files that are not the target of service access may be stored in it, and in that case the storage device may also enter the idle state.
The remaining capacity of the first storage device is not 0, and the utilization rate of the second storage device is not 0, where the utilization rate is the ratio of the used space capacity of the storage device to its specification capacity. It can be understood that when the remaining capacity of a storage device is 0, that is, when its storage space is full, no further data can be written into it, so a storage device with a remaining capacity of 0 does not participate in data migration as the first storage device. Conversely, since the purpose of data migration is to put more storage devices into the idle state, a storage device with a utilization rate of 0 can be regarded as having entered, or being about to enter, the idle state, so it does not participate in data migration as the second storage device.
The target parameter of the first storage device is greater than the target parameter of the second storage device, and the remaining capacity of the first storage device is greater than or equal to the used space capacity of the second storage device, so that the service data of the second storage device can be completely migrated to the first storage device. The target parameters include the utilization rate, the specification capacity, the ratio of the specification capacity to the operating power consumption, the reciprocal of the operating power consumption, and the priority.
It should be noted that the number of the first storage devices may be one or more, and when the number of the first storage devices is one, the remaining capacity of the first storage device is the remaining capacity of the one first storage device, and when the number of the first storage devices is a plurality, the remaining capacity of the first storage device is the sum of the remaining capacities of the plurality of first storage devices.
In the embodiment of the application, the storage device with smaller target parameters is used as the second storage device, and the storage device with larger target parameters is used as the first storage device, so that more beneficial effects can be obtained.
When the target parameter is the utilization rate, the management device migrates the data of the second storage device with the lower utilization rate to the first storage device with the higher utilization rate; when the specification capacity of the first storage device and that of the second storage device are the same, the migration amount is smaller, which saves computing resources.
It can be appreciated that in a computing system, a CXL storage pool is typically built from storage devices of the same model or specification, so that the storage devices in the pool can exchange data directly without additional configuration, which facilitates unified management by the management device.
In some implementations, the first storage device and the second storage device are both storage devices in a CXL storage pool.
In other embodiments, at least one of the first storage device and the second storage device is a storage device of the computing device, i.e., a local memory of the computing device.
Whether the storage devices of the computing devices participate in data migration can be determined according to actual service requirements and service characteristics. For example, when a service is latency-sensitive, the storage device of the computing device may participate in data migration so that the service can access data with lower latency; when occupying the local memory of the computing device would affect the operation of its services, the storage device of the computing device may not participate in data migration.
In one possible implementation, when the management device allocates storage space of the CXL storage pool to a computing device, it allocates the storage space of an entire storage device in the CXL storage pool to one computing device, that is, in units of storage devices; then, in the case where the trigger condition for data migration is satisfied, the management device may determine the first storage device and the second storage device from among the plurality of CXL storage pool storage devices allocated to the same computing device and the local memory of that computing device.
In another possible implementation, when allocating storage space of the CXL storage pool to computing devices, the management device divides the storage space of one storage device in the CXL storage pool into multiple portions, which may be allocated to one or more computing devices.
Then, if the trigger condition for data migration is satisfied and the local memory of the computing devices does not participate in data migration, the management device may first record the attribution information of the data in the CXL storage pool and then execute steps 301 and 302; after the data migration is completed, it configures the access rights of the storage space where the migrated data is located according to the attribution information, and finally returns the post-migration storage space addresses of the corresponding data to each computing device in the computing system.
If the local memory of a computing device participates in data migration and the local memory serves as the first storage device, the management device may migrate the data belonging to that computing device in the second storage device to the local memory, and migrate the data not belonging to that computing device to other first storage devices; if the local memory serves as the second storage device, the management device may determine the first storage device from the storage devices in the CXL storage pool.
Through the above two possible implementations, the access security of the data in the computing system can be ensured and access errors avoided.
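A minimal sketch of the attribution bookkeeping described above; the region fields and the migrate_fn callback are illustrative assumptions:

```python
# Hypothetical sketch: record which computing device owns each data region
# before migration, move the data, then report the new address to each owner
# so access rights can be reconfigured at the new location.

def migrate_with_attribution(regions, migrate_fn):
    results = []
    for region in regions:
        owner = region["owner"]           # record attribution before moving
        new_addr = migrate_fn(region)     # move the data to a first storage device
        results.append((owner, new_addr)) # re-grant access at the new address
    return results
```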
Having introduced the first storage device and the second storage device above, specific ways of determining them are described below.
In one possible implementation, the management device may determine that the storage device in the computing system with the smallest target parameter is the second storage device; the first storage device is then determined from the storage devices in the computing system other than the second storage device.
The management device may first determine whether there is a storage device whose remaining capacity is greater than or equal to the used space capacity of the second storage device. If so, one of the qualifying storage devices is determined as the first storage device. If not, the management device determines whether there are N storage devices whose remaining capacities sum to at least the used space capacity of the second storage device; if so, one combination is determined from the qualifying combinations of storage devices, and the storage devices in that combination serve as the first storage devices; if not, N+1 is assigned to N, and the step of determining N storage devices is performed again. N is a positive integer greater than or equal to 2; illustratively, the initial value of N is 2.
Optionally, when the management device determines one first storage device from a plurality of qualifying storage devices, it may choose the storage device with the largest target parameter; when it determines one combination from a plurality of qualifying combinations of storage devices, it may choose the combination with the largest sum of target parameters, with the storage devices in that combination serving as the first storage devices.
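A sketch of this search under the same illustrative assumptions (dict-based devices, a caller-supplied target_parameter function); growing the combination size and the tie-break rule follow the description above:

```python
from itertools import combinations

# Hypothetical sketch: look for a single first storage device whose remaining
# capacity covers the second device's used space; failing that, try
# combinations of N = 2, 3, ... devices, preferring the candidate with the
# largest (sum of) target parameter.

def find_first_devices(candidates, needed, target_parameter):
    for n in range(1, len(candidates) + 1):
        feasible = [
            combo for combo in combinations(candidates, n)
            if sum(d["spec_capacity"] - d["used"] for d in combo) >= needed
        ]
        if feasible:
            return max(feasible,
                       key=lambda combo: sum(target_parameter(d) for d in combo))
    return None   # no combination can absorb the second device's data
```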
Since the purpose of step 302 is to put the second storage device into the idle state, the management device may determine one second storage device and then determine one or more first storage devices, to ensure that the service data in the second storage device can be completely migrated to the first storage devices. In this way, by preferentially controlling the storage device with the smallest target parameter to enter the idle state, the power consumption of the computing system can be reduced quickly.
In one possible implementation, the management device may sort the storage devices in the computing system in descending order of the target parameter, determine the first M storage devices in the sorting result as the first storage devices, and then determine, from the storage devices in the computing system other than the first storage devices, a storage device whose used space capacity is less than or equal to the sum of the remaining capacities of the M first storage devices as the second storage device.
Wherein M is a positive integer greater than or equal to 1.
If the management device determines that no second storage device corresponding to the M first storage devices exists, it assigns M+1 to M and then performs the step of determining M first storage devices again. Illustratively, the initial value of M is 1.
Optionally, when the management device determines one second storage device from a plurality of qualifying storage devices, the storage device with the smallest target parameter may be determined as the second storage device.
By preferentially storing the data in the first M storage devices of the target parameter sorting result, the beneficial effect corresponding to the target parameter can be maximized.
In one possible implementation, the target parameter is utilization; the management device may determine that the storage device with the utilization rate less than or equal to the first threshold is a second storage device, and mark the storage device with the utilization rate greater than the first threshold as an active storage device; then, one or more storage devices with the sum of the residual capacities larger than the used space capacity of the second storage device are determined as the first storage device from the active storage devices.
The first threshold is used for judging whether the storage device needs to be used as a source device for data migration or not; a storage device having a utilization rate below a first threshold may be considered a less energy efficient storage device, at which point the management device may migrate data therein to a more energy efficient storage device.
As with the possible implementation described above, when the management device needs to determine one of the plurality of storage devices having a remaining capacity larger than the used space capacity of the second storage device as the first storage device, the management device may determine the highest-utilization one of the plurality of storage devices as the first storage device; one of the plurality of storage devices having the largest specification capacity, the largest ratio of the specification capacity to the operating power consumption, the largest reciprocal of the operating power consumption, or the highest priority may also be determined as the first storage device.
The first threshold may be a preset threshold, or a threshold determined by the management device according to real-time scenario requirements. By determining the second storage device with the first threshold, the amount of data the management device migrates over a period of time can be controlled.
In another possible implementation, the management device may determine, from the active storage devices, that one or more storage devices whose sum of the remaining capacities is greater than the sum of the used space capacities of all the second storage devices are the first storage device.
It can be understood that a storage device in the active state is not the same concept as an active storage device: a storage device in the active state is one whose utilization is not 0, i.e., whose storage space is occupied by service data, whereas an active storage device is a storage device to which the management device has added a special mark.
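As an illustrative sketch of the first-threshold implementation above (device fields and the greedy combination strategy are assumptions, not the patent's prescribed rules):

```python
def utilization(d):
    return d["used"] / d["capacity"]

def remaining(d):
    return d["capacity"] - d["used"]

def split_by_first_threshold(devices, first_threshold):
    """Devices at or below the threshold become second (source) devices;
    the rest are marked as active storage devices."""
    seconds = [d for d in devices if utilization(d) <= first_threshold]
    active = [d for d in devices if utilization(d) > first_threshold]
    return seconds, active

def choose_firsts_for(second, active):
    """Pick one active device, or a combination, whose remaining capacity
    covers the used space of the given second device."""
    fits = [d for d in active if remaining(d) >= second["used"]]
    if fits:
        # prefer a single destination with the highest utilization
        return [max(fits, key=utilization)]
    chosen, budget = [], 0
    for d in sorted(active, key=remaining, reverse=True):
        chosen.append(d)
        budget += remaining(d)
        if budget >= second["used"]:
            return chosen
    return []  # not enough remaining capacity among active devices
```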
In some possible implementations, the storage devices in the CXL storage pool do not all have the same specification capacity. It can be understood that storage devices of different specification capacities typically have different operating power consumption.
In this case, the management device may determine the first storage device and the second storage device using utilization as the target parameter, using specification capacity or operating power consumption as the target parameter, or using a combination of one or more of utilization, specification capacity, and operating power consumption as the target parameter.
In one possible implementation, the specification capacity of the first storage device is higher than the specification capacity of the second storage device, i.e. the management device selects a storage device with a larger specification capacity as the first storage device.
By preferentially migrating data to storage devices with larger specification capacity, as many storage devices as possible can be in an idle state and as few as possible in an active state, minimizing the storage management cost of the computing system.
In one possible implementation, the ratio of specification capacity to operating power consumption of the first storage device is higher than that of the second storage device, i.e., the management device selects a storage device with a higher ratio of specification capacity to operating power consumption as the first storage device.
A higher ratio of specification capacity to operating power consumption means a larger specification capacity or a smaller operating power consumption; using the storage device with the higher ratio as the first storage device is therefore more consistent with the purpose of the method of the embodiment of the application.
By preferentially migrating data to storage devices with higher ratios of specification capacity to operating power consumption, the power consumption of the computing system can be reduced as far as possible.
In one possible implementation, the target parameter is priority, where storage devices are either pre-designated or not pre-designated; the management device may determine the pre-designated storage devices as first storage devices, and the non-pre-designated storage devices other than the first storage devices as second storage devices.
The management device may pre-store information indicating these pre-designated storage devices. The designation may take the form of a condition, for example prioritizing the first N storage devices when sorted by specification capacity from large to small, or prioritizing storage devices of a specified type, such as those of a specified computing device; it is also possible to directly assign each storage device a priority as the first storage device or the second storage device.
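A minimal sketch of the priority-based split, assuming the pre-designated devices are identified by a pre-stored set of names (a hypothetical representation):

```python
# Sketch of the priority-based split; the designated-name set stands in for
# whatever pre-stored designation information the management device holds.
def split_by_priority(devices, designated):
    """designated: names of pre-designated (first) storage devices."""
    firsts = [d for d in devices if d["name"] in designated]
    seconds = [d for d in devices if d["name"] not in designated]
    return firsts, seconds

devices = [{"name": "cxl0"}, {"name": "cxl1"}, {"name": "dram0"}]
print(split_by_priority(devices, {"dram0"}))
```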
In practical applications, the first storage device and the second storage device may be determined according to the actual situation of the storage devices in the CXL storage pool and the service requirements, by adopting or combining the above possible implementations, so that the power consumption and the storage management cost of the computing system reach a predetermined balance.
In one possible implementation, the storage devices in the CXL storage pool all have the same specification capacity, and the first storage device and the second storage device are both storage devices in the CXL storage pool. When the triggering condition of data migration is met, the management device may obtain the sum of the used space capacities of all storage devices in the CXL storage pool; divide this sum by the specification capacity and round the result up to obtain the minimum number K; and finally determine K storage devices in the CXL storage pool as first storage devices, and all storage devices in the CXL storage pool other than the K first storage devices as second storage devices.
The first storage device and the second storage device are both storage devices in the CXL storage pool, which means that the local memory of the computing device does not participate in data migration.
When the storage devices participating in data migration have the same specification capacity, the management device can calculate the total amount of data in all storage devices currently participating in data migration, and then calculate the minimum number K of first storage devices required to store that total amount of data; the management device can then determine, according to the corresponding rule, K of the storage devices participating in data migration as first storage devices, and determine all participating storage devices other than the first storage devices as second storage devices.
Optionally, the management device may obtain a utilization rate of each storage device in the CXL storage pool; and then sequencing the storage devices in the CXL storage pool from high to low according to the utilization rate, and acquiring the first K storage devices in the sequencing result as first storage devices.
By selecting the first K storage devices in the utilization ranking as first storage devices, the remaining second storage devices have lower utilization, so that with equal specification capacities the amount of migrated data is as small as possible.
In some other possible implementations, the management device may also select as first storage devices the first K storage devices in descending order of the reciprocal of operating power consumption (i.e., ascending order of operating power consumption), so as to reduce the operating power of the computing system as much as possible.
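For the equal-capacity case above, a minimal sketch of computing the minimum number K and selecting the first K devices by utilization; the names and values are illustrative:

```python
import math

def plan_equal_capacity_migration(pool, spec_capacity):
    total_used = sum(d["used"] for d in pool)
    # K = ceiling of total used space over the common specification capacity
    k = math.ceil(total_used / spec_capacity)
    ranked = sorted(pool, key=lambda d: d["used"] / d["capacity"], reverse=True)
    firsts, seconds = ranked[:k], ranked[k:]  # all others are second devices
    return k, firsts, seconds

pool = [{"name": f"cxl{i}", "used": u, "capacity": 100}
        for i, u in enumerate([70, 35, 65, 30])]
k, firsts, seconds = plan_equal_capacity_migration(pool, 100)
print(k, [d["name"] for d in firsts], [d["name"] for d in seconds])
# total used = 200 -> K = 2; cxl0 and cxl2 become first storage devices
```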
It can be understood that the above examples of possible implementations for determining the first storage device and the second storage device do not limit the application, and schemes obtained by simple transformation and derivation of these possible implementations also fall within the scope of the application.
Step 302, the management device migrates the data in the second storage device to the first storage device, and switches the second storage device from the active state to the idle state.
The management device may migrate the service data in the second storage device to the first storage device, so that no service accesses the second storage device, and at this time, the management device may switch the second storage device from the active state to the idle state.
Optionally, the second storage device may automatically enter an idle state when no service is accessed within a preset period of time; the idle state may also be entered under control of the management device.
In one possible implementation, after the management device migrates the data in the second storage device to the first storage device, if a storage space allocation request is received from the computing device, the storage space of the storage device in an active state is allocated to the computing device in response to the storage space allocation request.
Wherein the storage space allocation request is used to request the management device to allocate storage space for the computing device; it may also be referred to as a storage space application request. By sending the request to the management device, the computing device informs the management device that it needs to obtain storage space, thereby triggering the management device to execute the storage space allocation process. The request may be a message in an encapsulation format specified by the CXL protocol.
By allocating the storage space of the storage device in the active state to the computing device, the storage device in the idle state can be kept in the idle state with low power consumption without participating in memory allocation as much as possible.
For example, the management device may determine, from among the storage devices in the active state, the storage devices whose remaining capacity can satisfy the storage space allocation request; determine the one with the highest utilization among those with sufficient remaining capacity as the first target storage device; and finally allocate the storage space of the first target storage device to the requester of the storage space allocation request.
If no single storage device in the active state has a remaining capacity that can satisfy the storage space allocation request, the storage devices in the active state are combined to obtain a storage device combination whose sum of remaining capacities can satisfy the request; the combination with the highest average utilization is then determined, and the storage space corresponding to that combination is allocated to the requester of the storage space allocation request.
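A minimal sketch of this allocation policy; note that the combination step here is a greedy approximation of "highest average utilization", an assumption rather than the patent's exact rule:

```python
def utilization(d):
    return d["used"] / d["capacity"]

def remaining(d):
    return d["capacity"] - d["used"]

def allocate(active_devices, requested):
    """Serve a storage space allocation request from active devices only."""
    fits = [d for d in active_devices if remaining(d) >= requested]
    if fits:
        return [max(fits, key=utilization)]  # first target storage device
    # combine active devices, starting from the most utilized, until the
    # combined remaining capacity satisfies the request
    chosen, budget = [], 0
    for d in sorted(active_devices, key=utilization, reverse=True):
        chosen.append(d)
        budget += remaining(d)
        if budget >= requested:
            return chosen
    return []  # request cannot be satisfied by active devices
```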
After data migration is completed, the management device may perform power-consumption control on the storage devices in the idle state to further reduce power consumption, for example controlling a plurality of idle storage devices to power off in turn; functions such as data recovery, memory hot-plugging, and memory pooling and sharing can also be better realized based on the storage devices in the idle state.
For example, after migrating data to fewer storage devices, the management device may implement memory pooling and sharing based on these storage devices. Specifically, the storage space of these storage devices is treated as a single memory pool from which the applications of the computing devices can apply for storage space. This avoids the problem that, when the storage space regions occupied by different applications on a storage device are discontinuous, storage fragments are produced, and a single fragment may be too small to satisfy a subsequent application and thus sit idle; the management device can instead allocate fragments in combination, improving the usage efficiency of the storage devices.
For example, when a storage device fails, the management device may perform data migration and migrate the service data in the failed storage device to normally running storage devices. Specifically, the management device may determine, from among the storage devices that are in the active state and running normally, the storage devices whose remaining capacity can store the service data in the failed storage device; it then determines the one with the highest utilization among those with sufficient remaining capacity as the second target storage device, and migrates the service data in the failed storage device to the second target storage device.
If no single storage device that is in the active state and running normally has a remaining capacity that can store the service data in the failed storage device, the storage devices in the active state are combined to obtain a storage device combination whose sum of remaining capacities can store that service data; the combination with the highest average utilization is then determined, and the service data in the failed storage device is migrated to the storage space corresponding to that combination.
Because the service data has been migrated, maintenance personnel can hot-swap the failed storage device without affecting service operation, and the whole computing system does not need to be powered down for overhaul.
It will be appreciated that the management device may cycle through steps 301 and 302 until a termination condition is reached to reduce the power consumption and storage management costs of the computing system as much as possible.
In one implementation, the termination condition may be understood as the power consumption and storage management cost of the computing system reaching their minimum. When they have not reached the minimum, the termination condition is not satisfied and the triggering condition of data migration is satisfied, so the management device executes a round of steps 301-302. After executing a round, it again determines whether the power consumption and storage management cost of the computing system are at their lowest; if not, step 301 is triggered and another round of steps 301-302 is executed; if so, the management device stops. This repeats. After the power consumption and storage management cost of the computing system reach the minimum, the management device still periodically confirms whether they remain at the minimum.
The specific procedure by which the management device performs steps 301-302 is, for example, as follows:
The management device first determines the storage device whose remaining capacity is not 0 and whose utilization is highest as the first storage device, and determines all storage devices whose utilization is not 0 and lower than that of the first storage device as second storage devices. The management device can then migrate the data in each second storage device to the first storage device in ascending order of utilization. Once the storage space of the first storage device is full, the full device can no longer serve as the first storage device; this can be regarded as the end of a round of data migration.
In the above process, the first storage device is the storage device with the highest utilization among those whose remaining capacity is not 0. When no storage device in the computing system has a utilization that is not 0 and lower than that of the first storage device, the storage devices other than the first storage device are all in a state where the utilization is 0 or the remaining capacity is 0; that is, the data in the computing system is concentrated in as few storage devices as possible and as many storage devices as possible are in the idle state, so the management device may determine that the termination condition is met. The termination condition of this implementation is therefore: for a first storage device, no second storage device exists; correspondingly, the triggering condition is: for a first storage device, a second storage device exists.
In this case, the management device may determine a first storage device according to the implementation above and then determine whether a second storage device exists. If so, it continues to execute step 302; if not, i.e., the result of the step of determining the second storage device is the empty set, the management device ends the storage space management flow.
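A minimal sketch of this loop, modelling migration as moving byte counts only (real data movement and state switching are elided); applied to the utilizations of fig. 5, described later, it reproduces the 100%/0%/100%/0%/42% result:

```python
def utilization(d):
    return d["used"] / d["capacity"] if d["capacity"] else 0.0

def consolidate(devices):
    while True:
        nonfull = [d for d in devices if d["used"] < d["capacity"]]
        if not nonfull:
            break
        first = max(nonfull, key=utilization)
        seconds = [d for d in devices
                   if 0 < utilization(d) < utilization(first)]
        if not seconds:
            break  # termination: no second device exists for this first device
        for s in sorted(seconds, key=utilization):  # ascending utilization
            move = min(s["used"], first["capacity"] - first["used"])
            first["used"] += move
            s["used"] -= move  # a device drained to 0 may enter the idle state
            if first["used"] == first["capacity"]:
                break  # first device full: this round of migration ends
    return devices

devices = [{"name": f"dev{i}", "used": u, "capacity": 100}
           for i, u in enumerate([70, 37, 65, 30, 40])]
print(consolidate(devices))
```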
In other possible implementations, the termination condition may be that the average utilization of the storage devices in the active state reaches a certain threshold, such as 90%; the termination condition may also be that the number of storage devices in the active state is less than or equal to a certain value, or that the total power consumption of the storage devices in the active state is less than or equal to a certain threshold.
In another implementation, the termination condition may be understood as the average utilization of the storage devices in the active state reaching a third threshold. When the average utilization has not reached the third threshold, the management device may determine that the termination condition is not satisfied and that the triggering condition of data migration is satisfied; the management device then executes a round of steps 301-302. After executing a round, it again determines whether the average utilization has reached the third threshold; if not, step 301 is triggered and another round is executed; if so, the management device stops. This repeats. After the average utilization reaches the third threshold, the management device still periodically confirms whether the average utilization of the storage devices in the active state reaches the third threshold.
In one possible implementation, the management device may first determine storage devices whose utilization is less than or equal to a first threshold as second storage devices, and mark storage devices whose utilization is greater than the first threshold as active storage devices; it then determines a first storage device from among the active storage devices and migrates the data of a second storage device to it. After the storage space of the first storage device is fully written, the management device again determines a first storage device from among the active storage devices whose remaining capacity is not 0, repeating the steps of migrating data and determining a first storage device until the data in all second storage devices has been migrated.
In another implementation, the termination condition may be understood as there being no active storage device whose utilization is less than or equal to a second threshold. When the utilization of some active storage device is less than or equal to the second threshold, the termination condition is not satisfied and the triggering condition of data migration is satisfied; at this time, the management device may determine that the active storage device whose utilization is less than or equal to the second threshold is a second storage device, and delete the active-storage-device mark of that second storage device. It can be understood that a storage device whose active-storage-device mark has been deleted is no longer an active storage device.
Thereafter, the management device triggers and performs the step of determining, from among the active storage devices, a first storage device corresponding to the second storage device, and then performs step 302. After executing step 302, it again determines whether there is an active storage device whose utilization is less than or equal to the second threshold; if so, the above process is repeated; if not, the management device stops. This repeats. When no active storage device has a utilization less than or equal to the second threshold, the management device still periodically confirms whether such an active storage device exists.
The second threshold is also a preset threshold and may be greater than, less than, or equal to the first threshold; that is, the embodiment of the present application does not limit the magnitude relationship between the two.
By setting the first threshold and the second threshold, and migrating data from lower-utilization second storage devices to higher-utilization first storage devices based on the two thresholds, the management device can be kept from migrating a large amount of data in a short time while managing the storage space of the computing devices, ensuring the performance of the management device.
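A minimal sketch of the second-threshold check; the "active_mark" field stands in for the management device's active-storage-device mark and is an assumption for illustration:

```python
def utilization(d):
    return d["used"] / d["capacity"]

def demote_below_second_threshold(devices, second_threshold):
    """Unmark active devices whose utilization has fallen to or below the
    second threshold; each becomes a second (source) storage device."""
    demoted = [d for d in devices
               if d.get("active_mark") and utilization(d) <= second_threshold]
    for second in demoted:
        second["active_mark"] = False  # no longer an active storage device
        # a full pass would now determine first devices among the devices
        # still marked active and migrate this device's data to them
    return demoted

devices = [
    {"name": "cxl0", "used": 80, "capacity": 100, "active_mark": True},
    {"name": "cxl1", "used": 15, "capacity": 100, "active_mark": True},
]
print([d["name"] for d in demote_below_second_threshold(devices, 0.2)])
# cxl1 drops out of the active set and becomes a migration source
```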
In the embodiment of the application, the data in the second storage device is migrated to the first storage device, and then the second storage device is controlled to be switched from the active state to the idle state, so that more storage devices can be in the idle state, and the power consumption of the computing system is reduced; meanwhile, the management equipment needs to manage less storage equipment in an active state, and can save storage management cost.
The following describes the flow of the storage space management method according to the embodiment of the present application in detail with reference to fig. 4 to 7.
Referring to fig. 4, fig. 4 is a flowchart illustrating a storage space management method according to an embodiment of the application, and the method is applied to the computing system shown in fig. 2. As shown in fig. 4, the method includes steps 401 to 413. In this embodiment, the local memory of the computing device participates in the data migration.
Step 401, the computing device sends a first storage space allocation request to a management device.
After the computing device is powered on, its processor executes the application program code stored in the computing device and runs the corresponding applications. After initialization is completed, an application determines that the available memory of the computing device includes the local memory and the storage devices in the CXL storage pool of the computing system (hereinafter referred to as CXL storage devices). The application then sends a first storage space allocation request to the management device that manages the CXL storage devices, requesting the management device to allocate storage space of a CXL storage device for it; it likewise requests the operating system, which manages the local memory, to allocate storage space of the local memory for it.
Optionally, each application program applies for the storage space of the local memory and the storage space of the CXL storage device.
Optionally, a memory management application in the computing device sends the first storage space allocation request to the management device, and then treats the CXL storage device space allocated by the management device and the local memory space allocated by the operating system as one memory pool; other applications in the computing device send allocation requests to the memory management application when they need memory.
The first storage space allocation request includes a first configuration capacity, where the first configuration capacity refers to a storage space size that needs to be allocated for the computing device.
Step 402, the management device determines a first target storage space according to the first storage space allocation request.
The management device may determine, according to the first storage space allocation request, a first target storage device having a remaining capacity greater than or equal to a first configuration capacity, or a first target storage device combination having a sum of remaining capacities greater than or equal to the first configuration capacity, from among CXL storage devices of the computing system; a first target storage space is then determined in the first target storage device or the first target storage device combination.
The size of the first target storage space is larger than or equal to the first configuration capacity.
Wherein, the management device records the use information of each CXL storage device.
Optionally, the management device includes a memory application table, where the memory application table is used to record the size of the storage space of the CXL storage device applied and released by each computing device; the management device may obtain the remaining capacity of each CXL storage device according to the memory application table, and then determine the first target storage device or the first target storage device combination from among the CXL storage devices according to the remaining capacity.
Each time the memory management program requests the management device to allocate or release storage space of a CXL storage device, the management device updates the memory application table according to the corresponding allocation or release result.
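A minimal sketch of such a memory application table; the schema (per-device allocated totals rather than per-request records) is a simplifying assumption:

```python
from collections import defaultdict

class MemoryApplicationTable:
    """Tracks applied and released CXL storage space per device."""
    def __init__(self, spec_capacities):
        self.capacity = dict(spec_capacities)   # device -> specification capacity
        self.allocated = defaultdict(int)       # device -> currently allocated

    def record_alloc(self, device, size):
        self.allocated[device] += size

    def record_release(self, device, size):
        self.allocated[device] -= size

    def remaining(self, device):
        return self.capacity[device] - self.allocated[device]

table = MemoryApplicationTable({"cxl0": 100, "cxl1": 100})
table.record_alloc("cxl0", 40)
print(table.remaining("cxl0"))  # 60
```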
Step 403, the management device allocates the first target storage space for the computing device.
Wherein, upon determining the first target storage space, the management device may directly allocate the first target storage space to the computing device. Specifically, the management device may set the access right of the first target storage space, so that the first target storage space is a storage space dedicated to the computing device; other computing devices in the computing system cannot access and apply for allocation of the first target storage space until the computing device releases the first target storage space.
After completing the allocation of the first target storage space, the management device may return address information of the first target storage space to the computing device.
Step 404, the management device detects whether a triggering condition of data migration is satisfied.
After the computing system has run for a period of time, the service data of the computing devices may be scattered across different storage devices, so that many storage devices are frequently read and written and remain in the active state, and the power consumption of the computing system is high. At this time, the management device can manage the storage space in the computing system, concentrating the service data into fewer storage devices and leaving more storage devices in the idle state, thereby reducing the power consumption of the computing system.
Therefore, the management device may be preset with a triggering condition of data migration, and when the triggering condition is met, it is stated that the management device needs to perform storage management on the computing system, that is, perform data migration to reduce power consumption of the computing system.
The trigger may be external, such as an instruction sent by a service process or a memory management program in a computing device, or the management device detecting that the power consumption of the computing system or the CXL storage pool is greater than a certain threshold; the trigger may also be internal, for example a migration period configured in the management device, with data migration performed once every migration period.
Optionally, the management device is further preset with migration parameters. The migration parameters may be parameters required by the management device to perform data migration, such as the first threshold and the second threshold in the embodiment shown in fig. 3, or a designated identity, condition, and/or order of storage devices; they may also be triggering conditions of data migration, such as the migration period.
Optionally, the migration parameters are entered into the management device by a user, an administrator, or operation and maintenance personnel.
The management device may start to perform step 404 in a loop after power-up, or may start to perform step 404 in a loop after step 403 is performed for the first time, until the trigger condition is satisfied.
If it is detected that the triggering condition is currently satisfied, the management device may execute step 405; if it is detected that the trigger condition is not currently met, step 404 is performed again.
Step 405, if the triggering condition is met, the management device obtains migration information of each storage device in the computing system.
After determining that the triggering condition of data migration is met, the management device may acquire migration information of a local memory of each computing device and migration information of each CXL storage device, where the migration information is used to determine a source device and a destination device of data migration. Alternatively, the migration information may include at least one of a specification capacity, a utilization rate, and an operation power consumption.
When the specification capacity and the running power consumption of the local memory and the CXL storage device are the same, the management device may use the utilization rate as migration information.
In this embodiment, utilization is taken as the migration information, and the cycle in which the management device realizes data migration based on utilization is described as an example. It can be understood that the management device may also implement data migration based on specification capacity, operating power consumption, or other parameters.
The management device may send a first query request to each computing device in the computing system to query the corresponding local memory utilization. The management device can also query information such as the model number, the specification capacity, the running power consumption, the idle state power consumption, the residual capacity and the like of the corresponding local memory from each computing device according to the computing requirement of data migration.
It may be appreciated that, when the management device performs data migration, if there is a pre-designated storage device that does not participate in data migration, the management device sends the first query request to all computing devices in the computing system except for the computing device where the designated storage device is located.
After receiving the first query request, the computing device may return the utilization rate of the local memory of the computing device to the management device.
Optionally, the management device monitors the read-write condition of each CXL storage device, and calculates the utilization rate of each CXL storage device.
Alternatively, the management device may send a second query request to each computing device allocated with the first target storage space, so as to obtain the utilization rate of the first target storage space, thereby calculating the utilization rate of each CXL storage device.
Step 406, the management device determines the first storage device and the second storage device according to the migration information.
The management device can determine a source device and a destination device of data migration from the local memory and the CXL storage device according to the migration information, so as to realize data migration. In this embodiment, the destination device is referred to as a first storage device, and the source device is referred to as a second storage device.
In one possible implementation, the local memory of the computing device does not participate in data migration, at which time the management device obtains migration information for each CXL storage device; and determining the first storage device and the second storage device in the CXL storage device according to the migration information.
Specifically, the implementation manner of determining the first storage device and the second storage device according to the migration information in step 406 of this embodiment is similar to the implementation manner of step 301 in the embodiment shown in fig. 3, and will not be described herein.
Step 407, the management device migrates the data in the second storage device to the first storage device.
When the second storage device comprises a local memory, the management device can send a migration instruction to the computing device where the local memory is located; the computing device migrates data in the local memory to a first storage device according to the migration instruction.
It can be understood that, when the local memory serves as the second storage device, the local memory must retain at least the data required by the operating system so that the operating system can run normally; when the local memory of the computing device includes a plurality of memory banks and some of them are sufficient to store the data required to run the operating system, the computing device may migrate the data of the other memory banks to the first storage device.
Optionally, the migration instruction includes an address of the first storage device; and the computing equipment directly writes the data in the local memory into the address according to the migration instruction, and then deletes the data written into the first storage equipment in the local memory.
Optionally, the computing device sends the data in the local memory to the management device according to the migration instruction, and the management device forwards the data to the first storage device; the computing device then deletes the data in the local memory that has been sent to the management device.
Optionally, after the data in a local memory serving as the second storage device has been migrated and its utilization is 0, the processor of the computing device where the local memory is located may control the local memory to switch from the active state to the idle state.
When the second storage device includes the CXL storage device, the management device may directly migrate all of the data in the second storage device to the first storage device.
Optionally, the management device may read all data in the CXL storage device, and if the first storage device is a local memory of the computing device, the management device may send all data to a processor of the computing device, and the processor writes all data into the local memory; if the first storage device is a CXL storage device, the management device may directly write the entire data to the first storage device. And then the management device deletes all the data in the second storage device.
Step 408, the management device controls the second storage device to switch from the active state to the idle state.
After the data migration is completed, the management device can control the second storage device with the utilization rate of 0 in the computing system to be switched from the active state to the idle state, so that the power consumption is reduced, and the energy is saved.
For the CXL storage device with the utilization rate of 0, the management device can directly control the CXL storage device to be switched from an active state to an idle state; for a local memory of a computing device with a utilization rate of 0, the management device may send a control instruction to the computing device, so that the computing device controls the local memory to switch from an active state to an idle state.
Step 409, the management device detects whether a termination condition for data migration is satisfied.
Because there are many storage devices in the computing system, a single data migration may not achieve the desired reduction in the power consumption of the computing system, so the data migration process may be executed in a loop until the desired effect is achieved. The termination condition may be set based on the desired effect; when the termination condition is detected to be satisfied, the management device may determine that the data migration has achieved the intended effect.
Specific possible implementations of the termination condition may refer to the relevant description of step 302 in the embodiment shown in fig. 3, which is not repeated here.
After determining that the current round of data migration has ended, the management device can detect whether the termination condition of data migration is met; if not, data migration needs to continue, and the management device returns to execute step 405; if so, the management device determines that data migration is complete and terminates the data migration cycle.
Optionally, after it determines in step 405 that the triggering condition of data migration is met, the management device may determine that the current round of data migration has ended once a preset maximum migration time has elapsed. Optionally, in step 408, the management device may determine that the current round of data migration has ended once the maximum migration time has elapsed after it sends a migration instruction to the second storage device.
Optionally, the management device may actively query the utilization rate of the second storage device, and determine that the current data migration is ended when the utilization rate of the second storage device is detected to be 0.
Optionally, when the management device is responsible for migrating the data of the second storage device to the first storage device, it may determine that the current round of data migration has ended after writing the last byte of the data in the second storage device into the first storage device.
Step 410, the management device updates the running state table of the storage device.
The management device maintains a running state table of the storage devices, which records the running state of each local memory and CXL storage device in the computing system, e.g., whether the storage device is in the active state or the idle state; after the termination condition of data migration is met, the management device may update the running state table according to the post-migration running states of the storage devices.
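A minimal sketch of the running state table update; the "utilization 0 implies idle" rule follows the description above, while the state names and data layout are assumptions:

```python
from enum import Enum

class State(Enum):
    ACTIVE = "active"
    IDLE = "idle"

def update_running_state_table(table, devices):
    """devices: dicts with 'name' and 'used'; utilization 0 implies idle."""
    for d in devices:
        table[d["name"]] = State.IDLE if d["used"] == 0 else State.ACTIVE
    return table

table = {}
devices = [{"name": "dev1", "used": 100}, {"name": "dev2", "used": 0}]
print(update_running_state_table(table, devices))
```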
Referring to fig. 5 and fig. 6, fig. 5 and fig. 6 are comparison diagrams of storage device utilization before and after data migration in the case where the local memory participates in the data migration.
As shown in fig. 5, the management device connects and manages the storage devices 1 to 4 and the local memory, and the utilization rates before data migration are respectively: the utilization of the storage device 1 is 70%, the utilization of the storage device 2 is 37%, the utilization of the storage device 3 is 65%, the utilization of the storage device 4 is 30%, and the utilization of the local memory is 40%. The storage devices 1 to 4 and the local memory are in an active state, and the specification capacities of the storage devices 1 to 4 and the local memory are the same.
The management device may perform data migration based on the utilization rate, and specifically may be:
The storage device 1 with the highest utilization rate is obtained as a first storage device, and the storage device 4 with the lowest utilization rate is obtained as a second storage device; then the data in the storage device 4 is migrated to the storage device 1, at which time the storage device 1 is full and the storage device 4 utilization is 0.
Then, the storage device 3 with the highest utilization rate in the storage devices with the residual capacity not being 0 is obtained as the first storage device, the storage device 2 with the lowest utilization rate in the storage devices with the utilization rate not being 0 is obtained as the second storage device, and then data in the storage device 2 is migrated to the storage device 3.
After storage device 3 is fully written, storage device 2 still contains data. At this time, the local memory, which has the highest utilization among the storage devices whose remaining capacity is not 0, can be taken as the first storage device, and storage device 2, which has the lowest utilization among the storage devices whose utilization is not 0, is taken as the second storage device; the data in storage device 2 is then migrated to the local memory.
Through the above data migration process, a utilization ratio schematic diagram after data migration as shown in fig. 5 may be obtained, specifically: the utilization of the storage device 1 is 100%, the utilization of the storage device 2 is 0%, the utilization of the storage device 3 is 100%, the utilization of the storage device 4 is 0%, and the utilization of the local memory is 42%. At the same time, the storage device 2 and the storage device 4 having the utilization ratio of 0% enter an idle state.
At this time, the management device may determine that the termination condition of the data migration is met according to the current utilization rate of each storage device, and update the current running state of each storage device to the running state table.
As shown in fig. 6, the management device connects and manages the storage devices 1 to 4 and the local memory. The utilization rates before data migration are respectively as follows: the utilization of the storage device 1 is 15%, the utilization of the storage device 2 is 30%, the utilization of the storage device 3 is 15%, the utilization of the storage device 4 is 15%, and the utilization of the local memory is 15%. The storage devices 1 to 4 and the local memory are in an active state, and the specification capacities of the storage devices 1 to 4 and the local memory are the same.
The embodiment shown in fig. 6 differs from the embodiment shown in fig. 5 in that the local memory is a designated first storage device. Based on this, the management device may perform data migration based on the utilization rate, and specifically may be:
Taking a local memory as a first storage device, and taking the storage devices 1 to 4 as second storage devices; and sequentially migrating the data in the storage device 1, the storage device 3, the storage device 4 and the storage device 2 to a local memory according to the order of low utilization rate.
Through the above data migration process, the post-migration utilization diagram shown in fig. 6 may be obtained, specifically: the utilization of storage devices 1 to 4 is 0% and the utilization of the local memory is 90%. Meanwhile, storage devices 1 to 4, whose utilization is 0%, enter the idle state.
At this time, the management device may determine that the termination condition of the data migration is met according to the current utilization rate of each storage device, and update the current running state of each storage device to the running state table.
Referring to fig. 7, fig. 7 is a comparison chart of storage device utilization before and after data migration in the case that the local memory does not participate in data migration.
As shown in fig. 7, the management device connects and manages the storage devices 1 to 4 and the local memory, and the utilization rates before data migration are respectively: the utilization of the storage device 1 is 70%, the utilization of the storage device 2 is 35%, the utilization of the storage device 3 is 65%, the utilization of the storage device 4 is 30%, and the utilization of the local memory is 40%. The storage devices 1 to 4 and the local memory are in an active state, and the specification capacities of the storage devices 1 to 4 and the local memory are the same.
The management device may perform data migration based on the utilization rate, and specifically may be:
the storage device 1 with the highest utilization rate is obtained as a first storage device, and the storage device 4 with the lowest utilization rate is obtained as a second storage device; then the data in the storage device 4 is migrated to the storage device 1, at which time the storage device 1 is full and the storage device 4 utilization is 0. Then, the storage device 3 with the highest utilization rate in the storage devices with the residual capacity not being 0 is obtained as the first storage device, the storage device 2 with the lowest utilization rate in the storage devices with the utilization rate not being 0 is obtained as the second storage device, and then data in the storage device 2 is migrated to the storage device 3.
Through the above data migration process, a utilization ratio schematic diagram after data migration as shown in fig. 7 may be obtained, specifically: the utilization of the storage device 1 is 100%, the utilization of the storage device 2 is 0%, the utilization of the storage device 3 is 100%, the utilization of the storage device 4 is 0%, and the utilization of the local memory is 40%. At the same time, the storage device 2 and the storage device 4 having the utilization ratio of 0% enter an idle state.
At this time, the management device may determine that the termination condition of the data migration is met according to the current utilization rate of each storage device, and update the current running state of each storage device to the running state table.
Step 411, the computing device sends a second storage space allocation request to the management device.
After the management device completes at least one data migration, when the storage space already applied for by an application program becomes insufficient, the computing device may send a second storage space allocation request to the management device to apply for more storage space.
The second storage space allocation request includes a second configuration capacity, where the second configuration capacity refers to a storage space size that needs to be allocated again by the management device for the computing device.
Optionally, when the application program requests the operating system to allocate the second configuration capacity from the local memory in the active state, the application program sends the second storage space allocation request to the management device.
Step 412, the management device determines, according to the second storage space allocation request, a second target storage space from the storage devices in the active state.
Wherein, after completing at least one data migration, the management device begins controlling the power consumption of the computing system to be as low as possible; specifically, it controls as many storage devices as possible to be in the idle state and as few as possible in the active state. Therefore, after receiving the second storage space allocation request, the management device may determine, from among the CXL storage devices in the active state, a second target storage device whose remaining capacity is greater than or equal to the second configuration capacity, or a second target storage device combination whose sum of remaining capacities is greater than or equal to the second configuration capacity, and then determine the second target storage space from the available storage space of the second target storage device or combination.
Specifically, the management device may query the running state table of the storage device updated in step 410 to determine the CXL storage device in the active state.
The size of the second target storage space is equal to the second configuration capacity.
Other possible determination manners may refer to the determination of the first target storage space in step 402 of this embodiment, where the first target storage space in step 402 corresponds to the second target storage space in step 412, and will not be described herein.
Step 413, the management device allocates the second target storage space to the computing device.
Wherein, after determining the second target storage space, the management device may directly allocate the second target storage space in the CXL storage device to the computing device; address information for the second target storage space is then returned to the computing device.
In the embodiment of the application, the management device executes the data migration cycle and updates the running state table of the storage devices in real time, so that when a computing device requests allocation of CXL storage device space, the management device can concentrate the second target storage space allocated to the computing device in CXL storage devices that are in the active state; CXL storage devices in the idle state thus continue to run at low power consumption, maintaining the low-power state of the computing system and saving energy.
In the embodiment of the application, the local memory of the computing device may not participate in data migration, which makes it easier for the storage devices participating in data migration to have a uniform model, specification capacity, or power consumption, so that the process by which the management device determines the first storage device and the second storage device is simpler and more accurate.
As shown in fig. 8, fig. 8 is a schematic diagram of a possible logical structure of a management device according to an embodiment of the present application. The management device 800 includes: a processor 801, a communication interface 802, a memory 803, and a bus 804; the processor 801, the communication interface 802, and the memory 803 are interconnected by the bus 804. In an embodiment of the application, the processor 801 is used to control and manage the actions of the management device 800; for example, the processor 801 is used to run FM management software and perform the steps in any of the embodiments of fig. 3, 4, or 7 and/or other processes of the techniques described herein. The communication interface 802 is used to support communication by the management device 800; specifically, the communication interface 802 may include a first interface for connecting computing devices and a second interface for connecting the storage devices in the CXL storage pool. The memory 803 is used to store the program code and data of the management device 800.
The processor 801 may be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination that implements computing functions, for example a combination of one or more microprocessors, or a combination of a digital signal processor and a microprocessor. The bus 804 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 8, but this does not mean that there is only one bus or one type of bus.
Wherein the management device 800 shown in fig. 8 may be applied in a computing system as shown in fig. 1a, 1b or 2.
In another embodiment of the present application, there is also provided a computing system including: the system comprises a computing device, a management device and a CXL storage pool, wherein the CXL storage pool comprises a plurality of storage devices, the computing device is connected with the management device, and the management device is used for distributing and managing storage space in the CXL storage pool; the management device is used to implement the storage space management method described in the embodiments of fig. 3 or fig. 4 above.
In another embodiment of the present application, there is also provided a computer-readable storage medium having stored therein computer-executable instructions which, when executed by at least one processor of a device, perform the storage space management method described in the embodiment of fig. 3 or fig. 4 above.
In another embodiment of the present application, there is also provided a computer program product comprising computer-executable instructions stored in a computer-readable storage medium; the at least one processor of the device may read the computer-executable instructions from the computer-readable storage medium, the at least one processor executing the computer-executable instructions causing the device to perform the storage space management method described in the embodiments of fig. 3 or fig. 4 above.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be embodied in essence or a part contributing to the prior art or a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Claims (10)
1. A storage space management method, characterized in that the method is applied to a management device, the management device is used to manage a Compute Express Link (CXL) storage pool and a computing device, the computing device and the CXL storage pool comprise storage devices, and the storage devices comprise a first storage device and a second storage device; the method comprises the following steps:
Determining the first storage device and the second storage device, wherein the target parameter of the first storage device is larger than the target parameter of the second storage device, and the residual capacity of the first storage device is larger than or equal to the used space capacity of the second storage device, and the target parameter comprises the utilization rate, the specification capacity, the ratio of the specification capacity to the running power consumption, or the priority;
And migrating the data in the second storage device to the first storage device, and switching the second storage device from an active state to an idle state so as to reduce the power consumption of the second storage device.
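(For illustration only, not part of the claims: a minimal Python sketch of the consolidation step of claim 1. The StorageDevice type, its field names, and the counter-based "migration" are hypothetical stand-ins; an actual implementation would move data across the CXL fabric.)

```python
from dataclasses import dataclass

@dataclass
class StorageDevice:
    name: str
    spec_capacity: int         # specification (total) capacity, in bytes
    used_capacity: int = 0     # used space capacity, in bytes
    target_param: float = 0.0  # utilization, spec-capacity/power ratio, or priority
    active: bool = True

    @property
    def remaining_capacity(self) -> int:
        return self.spec_capacity - self.used_capacity

def consolidate(first: StorageDevice, second: StorageDevice) -> None:
    """Migrate data from `second` into `first`, then idle `second` (claim 1)."""
    # Claim 1 preconditions: first has the larger target parameter and enough
    # remaining capacity to absorb all of second's used space.
    assert first.target_param > second.target_param
    assert first.remaining_capacity >= second.used_capacity
    first.used_capacity += second.used_capacity  # stand-in for the actual data move
    second.used_capacity = 0
    second.active = False  # idle state reduces the device's power consumption
```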
2. The method of claim 1, wherein the determining the first storage device and the second storage device comprises:
determining the storage device with the smallest target parameter as the second storage device;
determining the first storage device from the storage devices other than the second storage device.
3. The method of claim 1, wherein the target parameter is the utilization rate, and the determining the first storage device and the second storage device comprises:
determining each storage device whose utilization rate is less than or equal to a first threshold as the second storage device, and marking each storage device whose utilization rate is greater than the first threshold as an active storage device;
determining the first storage device from among the active storage devices.
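(For illustration only: a sketch of the first-threshold classification of claim 3, reusing the hypothetical StorageDevice type from the sketch under claim 1; the threshold value is an arbitrary assumption.)

```python
FIRST_THRESHOLD = 0.1  # hypothetical first threshold (10% utilization)

def classify_by_utilization(devices: list) -> tuple:
    """Claim 3: devices at or below the first threshold become second (source)
    devices; the rest are marked as active storage devices."""
    second, active = [], []
    for d in devices:
        utilization = d.used_capacity / d.spec_capacity
        (second if utilization <= FIRST_THRESHOLD else active).append(d)
    for d in active:
        d.active = True  # the "active storage device" mark of claim 3
    return second, active
```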
4. The method of claim 1, wherein the storage devices in the CXL storage pool have the same specification capacity, and the first storage device and the second storage device are both storage devices in the CXL storage pool;
the determining the first storage device and the second storage device comprises:
acquiring the sum of the used space capacities of all storage devices in the CXL storage pool;
dividing the sum of the used space capacities by the specification capacity, and rounding the result up to obtain a minimum number K;
determining K storage devices in the CXL storage pool as the first storage device, and determining all storage devices in the CXL storage pool other than the first storage device as the second storage device.
5. The method of claim 4, wherein the determining K storage devices in the CXL storage pool as the first storage device comprises:
acquiring the utilization rate of each storage device in the CXL storage pool;
sorting the storage devices in the CXL storage pool by utilization rate from high to low, and determining the first K storage devices in the sorted result as the first storage device.
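(For illustration only: the K computation of claims 4 and 5 as a sketch, again reusing the hypothetical StorageDevice type; K = ceil(total used capacity / specification capacity), and the K most-utilized devices remain active.)

```python
import math

def pick_first_devices(pool: list) -> tuple:
    """Claims 4-5: keep the minimum number K of equal-capacity devices able to
    hold all used data; the K most-utilized become the first storage device(s),
    and all remaining devices become the second storage device(s)."""
    spec = pool[0].spec_capacity                     # claim 4: identical spec capacities
    total_used = sum(d.used_capacity for d in pool)
    k = math.ceil(total_used / spec)                 # round up to the minimum count K
    ranked = sorted(pool, key=lambda d: d.used_capacity / d.spec_capacity,
                    reverse=True)                    # claim 5: sort by utilization, high to low
    return ranked[:k], ranked[k:]                    # (first devices, second devices)
```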
6. The method of any one of claims 1-5, wherein before the determining the first storage device and the second storage device, or after the switching the second storage device from the active state to the idle state, the method further comprises:
determining whether a triggering condition for data migration is met, wherein the triggering condition comprises that the power consumption and storage management cost of a computing system are not at their lowest, and/or that the average utilization rate of the storage devices in the active state in the computing system is not equal to a third threshold, the computing system comprising the computing device, the management device, and the CXL storage pool;
if so, triggering the step of determining the first storage device and the second storage device.
7. The method of claim 3, wherein after the switching the second storage device from the active state to the idle state, the method further comprises:
determining whether a triggering condition for data migration is met, wherein the triggering condition comprises that the utilization rate of any active storage device is less than or equal to a second threshold;
if so, determining the active storage device whose utilization rate is less than or equal to the second threshold as the second storage device, and deleting the active storage device mark of the second storage device;
triggering the step of determining the first storage device from the active storage devices.
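(For illustration only: a sketch of the claim 7 re-trigger, with a hypothetical second threshold; when an active device's utilization falls to or below it, the device loses its active mark and first-device selection is rerun.)

```python
SECOND_THRESHOLD = 0.2  # hypothetical second threshold (20% utilization)

def find_retrigger_candidate(active_devices: list):
    """Claim 7: return an active device at or below the second threshold, after
    deleting its active-storage-device mark; return None if nothing triggers."""
    for d in active_devices:
        if d.used_capacity / d.spec_capacity <= SECOND_THRESHOLD:
            d.active = False           # delete the active mark (claim 7)
            active_devices.remove(d)   # it becomes the new second storage device
            return d                   # caller re-runs first-device selection
    return None
```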
8. The method of any one of claims 1-7, wherein the first storage device and the second storage device are both storage devices in the CXL storage pool; or at least one of the first storage device and the second storage device is a storage device of the computing device.
9. The method of any one of claims 1-8, wherein after the migrating data in the second storage device to the first storage device, the method further comprises:
receiving a storage space allocation request sent by the computing device;
in response to the storage space allocation request, allocating, to the computing device, storage space of a storage device in the active state.
10. A management device, characterized in that the management device comprises: a processor coupled to a memory, the memory storing at least one computer program instruction that is loaded and executed by the processor to cause the management device to implement the method of any one of claims 1-9.
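(For illustration only: one way the sketches above might compose, using utilization as the target parameter of claim 1; the device count and capacities are invented for the example.)

```python
GiB = 1 << 30
pool = [StorageDevice(f"cxl{i}", spec_capacity=64 * GiB) for i in range(4)]
pool[0].used_capacity = 48 * GiB
pool[1].used_capacity = 8 * GiB
pool[2].used_capacity = 4 * GiB

for d in pool:
    d.target_param = d.used_capacity / d.spec_capacity  # utilization as the target parameter

first, second = pick_first_devices(pool)  # K = ceil(60 GiB / 64 GiB) = 1 device stays active
for src in second:
    dst = next(d for d in first if d.remaining_capacity >= src.used_capacity)
    consolidate(dst, src)                 # migrate, then switch src to the idle state

print([(d.name, d.active, d.used_capacity // GiB) for d in pool])
# [('cxl0', True, 60), ('cxl1', False, 0), ('cxl2', False, 0), ('cxl3', False, 0)]
```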
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410048188.0A CN118069575A (en) | 2024-01-12 | 2024-01-12 | Storage space management method and management equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410048188.0A CN118069575A (en) | 2024-01-12 | 2024-01-12 | Storage space management method and management equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118069575A true CN118069575A (en) | 2024-05-24 |
Family
ID=91108159
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410048188.0A Pending CN118069575A (en) | 2024-01-12 | 2024-01-12 | Storage space management method and management equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118069575A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118474209A (en) * | 2024-07-11 | 2024-08-09 | 山东海量信息技术研究院 | Memory expansion system and data package packaging method, device, medium and product thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||