CN117492967B - Method, device, equipment and medium for managing storage system resources


Info

Publication number
CN117492967B
CN117492967B
Authority
CN
China
Prior art date
Legal status
Active
Application number
CN202311841280.4A
Other languages
Chinese (zh)
Other versions
CN117492967A (en)
Inventor
徐玉显
孙京本
刘清林
Current Assignee
Suzhou Metabrain Intelligent Technology Co Ltd
Original Assignee
Suzhou Metabrain Intelligent Technology Co Ltd
Application filed by Suzhou Metabrain Intelligent Technology Co Ltd
Priority to CN202311841280.4A
Publication of CN117492967A
Application granted
Publication of CN117492967B
Legal status: Active

Classifications

    • G06F9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F3/0608: Saving storage space on storage systems
    • G06F3/0631: Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G06F3/0644: Management of space entities, e.g. partitions, extents, pools
    • G06F9/5016: Allocation of resources to service a request, the resource being the memory
    • G06F9/5022: Mechanisms to release resources
    • G06F9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F2209/5011: Pool (indexing scheme relating to G06F9/50)


Abstract

The invention relates to the field of computers and provides a method, an apparatus, a device, and a medium for managing storage system resources. The method comprises the following steps: creating a separate scheduling domain for each path of CPU and memory resources of the storage system; establishing a global IO resource pool and a plurality of local IO resource pools in each scheduling domain, and setting resource linked lists; in response to system initialization, allocating IO resources in the global IO resource pool to the corresponding local IO resource pools and recording the allocation in the resource linked lists; in response to a local IO resource pool needing additional IO resources, having the local IO resource pool apply to the global IO resource pool for IO resources; and in response to the application succeeding, allocating the IO resources to the local IO resource pool and recording the allocation. The scheme of the invention increases IO processing paths, improves IO processing capability, reduces cross-path access of resources, lowers the overhead of cross-socket memory access, and keeps IO resources in an optimal configuration.

Description

Method, device, equipment and medium for managing storage system resources
Technical Field
The present invention relates to the field of computers, and more particularly, to a method, apparatus, device, and medium for storage system resource management.
Background
Against the background of mass storage, big data, and the AI era, the demand of various industries for storage services keeps growing, and the software and hardware architecture of storage systems is continuously upgraded and optimized to match service growth and performance requirements. The hardware architecture has been upgraded from a single-path CPU architecture to two-way or even multi-way designs, and the software must be upgraded and optimized along with the hardware platform; otherwise, the benefits brought by the hardware upgrade are hard to realize fully. For example, under a two-way architecture, a storage system resource management method that still uses single-path techniques can hardly exploit the advantages of multiple processors.
Disclosure of Invention
In view of this, an object of the embodiments of the present invention is to provide a method, an apparatus, a device, and a medium for managing storage system resources. With the technical solution of the present invention, IO processing paths can be increased, IO processing capability can be improved, cross-path access of resources can be reduced, the overhead of accessing memory across sockets can be lowered, and IO resources can be kept in an optimal configuration.
Based on the above objects, an aspect of an embodiment of the present invention provides a method for storage system resource management, including the steps of:
creating a separate scheduling domain for each path of CPU and memory resources of the storage system;
respectively establishing a global IO resource pool and a plurality of local IO resource pools in each scheduling domain, and respectively setting a resource linked list in the global IO resource pool and each local IO resource pool;
responding to system initialization, distributing IO resources in the global IO resource pool to the corresponding local IO resource pools, and recording distribution conditions in the respective resource linked lists;
in response to a local IO resource pool needing to add IO resources, the local IO resource pool applies IO resources to the global IO resource pool;
and in response to the local IO resource pool successfully applying to the global IO resource pool for IO resources, allocating, by the global IO resource pool, the corresponding IO resources to the local IO resource pool, and recording the allocation in the resource linked lists of the global IO resource pool and the local IO resource pool.
According to one embodiment of the present invention, the step of establishing a global IO resource pool and a plurality of local IO resource pools in each of the scheduling domains, and setting a resource linked list in the global IO resource pool and each of the local IO resource pools includes:
Creating a global IO resource pool in each scheduling domain, wherein the global IO resource pool is positioned on a physical memory directly connected with a CPU in an architecture to which the scheduling domain belongs;
respectively creating a plurality of local IO resource pools in each scheduling domain, and connecting each local IO resource pool to the corresponding global IO resource pool;
and setting a resource linked list in the global IO resource pool and each local IO resource pool respectively for recording IO resource information.
According to one embodiment of the present invention, the step of allocating IO resources in the global IO resource pool to corresponding local IO resource pools in response to system initialization, and recording allocation conditions in the respective resource linked lists includes:
in response to system initialization, applying for a corresponding number of IO resources from the global IO resource pool according to the static quota;
distributing the applied IO resources to the corresponding local IO resource pools;
and respectively recording the information of the allocated IO resources in the resource linked lists in the global IO resource pool and the local IO resource pool.
According to one embodiment of the present invention, further comprising:
in response to the local IO resource pool failing to obtain IO resources from the global IO resource pool, applying, by the global IO resource pool, for IO resources from the global IO resource pools of other scheduling domains, performing IO resource scheduling, and recording the scheduling in the respective resource linked lists.
According to an embodiment of the present invention, the step of, in response to the local IO resource pool failing to obtain IO resources from the global IO resource pool, applying by the global IO resource pool for IO resources from the global IO resource pools of other scheduling domains, performing IO resource scheduling, and recording the scheduling in the respective resource linked lists includes:
in response to the local IO resource pool failing to obtain IO resources from the global IO resource pool, applying, by the global IO resource pool, for IO resources from the global IO resource pools of other scheduling domains;
the global IO resource pools of the other scheduling domains transfer the corresponding IO resources to the global IO resource pool that issued the application, and the scheduling is recorded in the resource linked lists of both global IO resource pools;
the global IO resource pool that issued the application allocates the obtained IO resources to the local IO resource pool that issued the application, and the allocation is recorded in the resource linked lists of the global IO resource pool and the local IO resource pool.
According to one embodiment of the present invention, further comprising:
in response to the IO pressure of the scheduling domain remaining below a preset value for a first preset time, releasing the IO resources in each local IO resource pool back into the global IO resource pool;
and recording the release in the resource linked lists of the local IO resource pools and the global IO resource pool.
According to one embodiment of the present invention, further comprising:
in response to IO resources released by a local IO resource pool being IO resources of another scheduling domain, releasing, by the global IO resource pool, the corresponding IO resources into the global IO resource pool of that scheduling domain;
and recording the release in the resource linked lists of the global IO resource pool and the global IO resource pool of the other scheduling domain.
According to one embodiment of the present invention, further comprising:
counting, in the resource linked lists, the number of unallocated IO resources among the IO resources that the global IO resource pool has applied for from the global IO resource pools of other scheduling domains;
comparing the counted number with a set threshold;
in response to the counted number exceeding the set threshold, releasing the unallocated IO resources to the global IO resource pools of the other scheduling domains within a second preset time;
and recording release conditions in resource linked lists of the global IO resource pool and the global IO resource pools of the other scheduling domains.
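The threshold-triggered return of idle borrowed resources described in this embodiment can be sketched as follows. This is a minimal illustration only: the dict-of-lists pool layout and the function name are assumptions, and the "second preset time" window from the text is omitted.

```python
def release_idle_borrowed(g_pool, lender_g, threshold):
    """If the count of borrowed-but-unallocated resources exceeds `threshold`,
    return them all to the lending domain's global free_list and report how
    many were released (0 if under the threshold)."""
    idle = g_pool["remote_free_list"]
    if len(idle) <= threshold:
        return 0
    lender_g["free_list"].extend(idle)   # lender regains ownership of the resources
    released = len(idle)
    idle.clear()                         # borrower's remote free list is now empty
    return released
```

A real implementation would also update the lender's allocated list, as the text requires recording the release in both global pools' linked lists.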
According to one embodiment of the present invention, further comprising:
counting, in the resource linked lists, the number of allocated IO resources among the IO resources that the global IO resource pool has applied for from the global IO resource pools of other scheduling domains;
in response to the counted number remaining below a preset value for a third preset time, releasing the unallocated IO resources to the global IO resource pools of the other scheduling domains within a fourth preset time;
and recording release conditions in resource linked lists of the global IO resource pool and the global IO resource pools of the other scheduling domains.
According to one embodiment of the present invention, further comprising:
in response to the IO pressure of a local IO resource pool remaining below a pressure threshold for a fifth preset time, reducing the static quota of the local IO resource pool;
and releasing, by the local IO resource pool, the IO resources corresponding to the reduced static quota back to the global IO resource pool, and recording the release in the resource linked lists of the local IO resource pool and the global IO resource pool.
According to one embodiment of the present invention, the resource linked lists in each IO resource pool include a linked list of the pool's free IO resources, a linked list of the pool's allocated IO resources, a linked list of IO resources borrowed from the remote domain that are in the free state, and a linked list of IO resources borrowed from the remote domain that are in the allocated state.
According to one embodiment of the present invention, further comprising:
Counting the current IO pressure of each scheduling domain;
and in response to receiving a new IO task, issuing the IO task to a scheduling domain with the minimum current IO pressure for processing.
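The least-pressure dispatch in this embodiment can be sketched as a one-line selection; the dict representation of a scheduling domain and its field names are assumptions made for illustration.

```python
def dispatch(domains, task):
    """Issue a new IO task to the scheduling domain whose current IO
    pressure is the minimum, and count the task toward that pressure."""
    target = min(domains, key=lambda d: d["pressure"])
    target["queue"].append(task)     # the task is processed inside this domain
    target["pressure"] += 1          # admitting the task raises the domain's load
    return target["name"]
```

The pressure metric itself (e.g. queued IOs, in-flight IOs) is not fixed by the text; any monotone load indicator fits this sketch.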
In another aspect of the embodiment of the present invention, there is also provided an apparatus for managing resources of a storage system, the apparatus including:
a creation module configured to create a separate scheduling domain for the CPU and memory resources of each path of the storage system;
the setting module is configured to respectively establish a global IO resource pool and a plurality of local IO resource pools in each scheduling domain, and respectively set a resource linked list in the global IO resource pool and each local IO resource pool;
the allocation module is configured to respond to system initialization, allocate IO resources in the global IO resource pool to corresponding local IO resource pools, and record allocation conditions in respective resource linked lists;
an application module configured to, in response to a local IO resource pool needing additional IO resources, have the local IO resource pool apply to the global IO resource pool for IO resources;
and a scheduling module configured to, in response to the local IO resource pool successfully applying to the global IO resource pool for IO resources, have the global IO resource pool allocate the corresponding IO resources to the local IO resource pool and record the allocation in the resource linked lists of the global IO resource pool and the local IO resource pool.
In another aspect of the embodiments of the present invention, there is also provided a computer apparatus including:
at least one processor; and
a memory storing computer instructions executable on the processor, the instructions, when executed by the processor, performing the steps of any of the methods described above.
In another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the methods described above.
The invention has the following beneficial technical effects: the method for managing storage system resources provided by the embodiments of the invention creates a separate scheduling domain for each path of CPU and memory resources of the storage system;
establishes a global IO resource pool and a plurality of local IO resource pools in each scheduling domain, and sets a resource linked list in the global IO resource pool and each local IO resource pool; in response to system initialization, allocates IO resources in the global IO resource pool to the corresponding local IO resource pools and records the allocation in the respective resource linked lists; in response to a local IO resource pool needing additional IO resources, has the local IO resource pool apply to the global IO resource pool for IO resources; and in response to the application succeeding, has the global IO resource pool allocate the corresponding IO resources to the local IO resource pool and record the allocation in the resource linked lists of both pools. This technical scheme can increase IO processing paths, improve IO processing capability, reduce cross-path access of resources, lower the overhead of cross-socket memory access, and keep IO resources in an optimal configuration.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the invention, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart diagram of a method of storage system resource management in accordance with one embodiment of the invention;
FIG. 2 is a schematic diagram of a dual processor architecture according to one embodiment of the present invention;
FIG. 3 is a schematic diagram of a dispatch domain in a two-way processor architecture according to one embodiment of the present invention;
FIG. 4 is a schematic diagram of an apparatus for storage system resource management in accordance with one embodiment of the invention;
FIG. 5 is a schematic diagram of a computer device according to one embodiment of the invention;
fig. 6 is a schematic diagram of a computer-readable storage medium according to one embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
The scheme of the invention can be used in a two-way processor architecture and also in architectures with more processor paths; the invention is described taking the two-way processor architecture as an example. As shown in fig. 2, each path has a multi-core CPU (also called a processor or socket), and each path has local memory banks (shown as DIMMs) directly connected to that path's CPU. The two paths are connected by a high-speed interconnect channel between the CPUs (e.g., UPI for Intel, xGMI for AMD). The two-way combination improves the computing capability of the platform, but a two-way platform has an obvious characteristic: a CPU accesses its directly connected memory significantly more efficiently than it accesses remote memory (the memory of the other path), because remote memory can only be reached through the high-speed interconnect channel and the remote CPU, and cache synchronization between the two CPUs is less efficient than within one CPU. Aimed at this technical characteristic of the two-way architecture, the invention provides a storage system resource management method that lets resource management both exploit the computing power brought by multiple CPUs and reduce the impact of cross-path memory access overhead under the two-way architecture.
IO resources in a storage system are in essence memory objects. A storage system is a highly concurrent system in which memory objects are accessed and processed very frequently, so avoiding cross-path access to IO resources as much as possible is the problem that resource management under a two-way architecture must solve. The storage system has a scheduling module at the application layer for scheduling the IO tasks of the service modules at each layer. Scheduling domains and resource pools are divided according to the two-way architecture: each path corresponds to one scheduling domain and its local IO resources. Scheduling domains are divided based on sockets, and IO resources are divided according to the physical location of their memory, i.e., a resource belongs to the path whose socket is directly connected to that memory. Thus, each path has independent task scheduling and resource management, that is, relatively independent IO paths. This increases IO processing paths, improves IO processing capability, reduces cross-path access of resources, and lowers the overall overhead of cross-socket memory access. Meanwhile, to adapt to scenarios where the pressure on the two paths is unbalanced, the invention also provides an adaptive adjustment strategy and method for IO resources under such imbalance, so as to satisfy the different demands on the two sides as much as possible and keep the IO resources in an optimal configuration.
With the above object in view, in a first aspect, an embodiment of a method for storage system resource management is provided. Fig. 1 shows a schematic flow chart of the method.
As shown in fig. 1, the method may include the steps of:
s1, creating a separate scheduling domain for the CPU and memory resources of each path of the storage system. As shown in FIG. 3, the storage system has its own scheduling module at the application layer for scheduling IO tasks of service modules at each layer, and first, the scheduling domains (schedule domains) are divided according to a two-way architecture, each schedule domain is relatively independent, each schedule domain has a complete IO path therein, that is, an IO is processed in either schedule domain-0 or schedule domain-1, and tasks in the processing process basically do not cross schedule domains. The task processing thread in each schedule domain is bound to the CPU core to which each schedule domain belongs, and when IO is issued to each business module for processing, the task to be processed in the next step is put into the same schedule domain for execution according to the schedule domain where the current task processing is located, so that each path has an independent IO processing path.
S2, establishing a global IO resource pool and a plurality of local IO resource pools in each scheduling domain, and setting a resource linked list in the global IO resource pool and each local IO resource pool. The IO paths are divided by schedule domain, and the IO resources are divided by schedule domain as well. Global IO resource pools are built in the system: the initially idle memory resources are divided into two global IO resource pools according to the physical location of the memory, and the two pools belong to different schedule domains. As shown in FIG. 3, there are two G_IO_Res (Global IO Resource) pools, located in different schedule domains. The memory in each G_IO_Res pool resides on the physical memory directly connected to the CPU of the path to which the domain belongs, and is allocated and used in that domain's IO processing path. The IO resources of each layer's service modules are in their own L_IO_Res (Local IO Resource) pools; each layer's service module has two L_IO_Res pools, allocated and used in the IO processing paths of the respective domains. The structures of G_IO_Res and L_IO_Res are basically the same, with four main members: free_list, allocated_list, remote_free_list, and remote_allocated_list. free_list is the pool's list of free resources; allocated_list is the pool's list of allocated resources; remote_free_list is the list of resources borrowed from the remote domain that are free; remote_allocated_list is the list of borrowed resources that are allocated.
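The four-list pool structure can be pictured with a small model. Only the four list names come from the description; the `IOResPool` class and its `alloc` method are assumptions made for illustration.

```python
from collections import deque

class IOResPool:
    """Sketch of one G_IO_Res / L_IO_Res pool with its four linked lists."""
    def __init__(self, resources=()):
        self.free_list = deque(resources)        # idle resources owned by this pool
        self.allocated_list = deque()            # owned resources currently in use
        self.remote_free_list = deque()          # idle resources borrowed from the remote domain
        self.remote_allocated_list = deque()     # borrowed resources currently in use

    def alloc(self):
        """Hand out a resource, preferring local ones over borrowed ones."""
        if self.free_list:
            r = self.free_list.popleft()
            self.allocated_list.append(r)
            return r
        if self.remote_free_list:
            r = self.remote_free_list.popleft()
            self.remote_allocated_list.append(r)
            return r
        return None                              # pool exhausted
```

Keeping borrowed resources on separate lists is what later lets a pool count and return exactly the resources it owes to the other domain.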
S3, in response to system initialization, allocating IO resources in the global IO resource pool to the corresponding local IO resource pools, and recording the allocation in the respective resource linked lists. In the system initialization stage, each layer's service module applies for a corresponding number of IO resources from the G_IO_Res pool according to its static quota and puts them into its own L_IO_Res pool. Specifically, a certain number of IO resources are transferred from the free_list of the G_IO_Res pool to the free_list of the service module's L_IO_Res pool, and these resources are recorded in the allocated_list of the G_IO_Res pool. Because each layer's service module has two L_IO_Res pools belonging to different domains, when an L_IO_Res pool applies for IO resources from a G_IO_Res pool, it always selects the G_IO_Res pool in the same domain as itself.
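The initialization hand-off just described, free_list of G_IO_Res to free_list of L_IO_Res with the grant recorded in allocated_list, can be sketched as below; representing pools as dicts of lists is an assumption for illustration.

```python
def init_allocate(g_pool, l_pool, static_quota):
    """Move up to `static_quota` resources from the same-domain G_IO_Res
    free_list into a module's L_IO_Res free_list, recording the grant in
    the G_IO_Res allocated_list. Returns how many were granted."""
    grant = g_pool["free_list"][:static_quota]
    del g_pool["free_list"][:len(grant)]
    g_pool["allocated_list"].extend(grant)   # global pool remembers what it handed out
    l_pool["free_list"].extend(grant)        # local pool can now serve these resources
    return len(grant)
```

If the global free_list holds fewer resources than the quota, the sketch grants what is available, which matches the later need for a fallback path when a pool runs short.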
S4, in response to a local IO resource pool needing additional IO resources, the local IO resource pool applies to the global IO resource pool for IO resources. As IO pressure increases, each layer's service module may need to dynamically expand its L_IO_Res pool to deliver higher performance. The process of expanding IO resources is again one of applying for a corresponding number of IO resources from the G_IO_Res pool and putting them into the module's own L_IO_Res pool; however, if there are not enough IO resources on the free_list of the G_IO_Res pool in this domain, the application fails, and an attempt can then be made to borrow IO resources from the other domain (i.e., the other scheduling domain).
S5, in response to the local IO resource pool successfully applying to the global IO resource pool for IO resources, the global IO resource pool allocates the corresponding IO resources to the local IO resource pool, and the allocation is recorded in the resource linked lists of the global IO resource pool and the local IO resource pool. The requested number of IO resources is transferred from the free_list of the G_IO_Res pool to the free_list of the service module's L_IO_Res pool, and these resources are recorded in the allocated_list of the G_IO_Res pool.
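Steps S4 and S5, including the borrow-from-the-other-domain fallback, might look like the following simplified sketch. The dict-of-lists pool layout and the function name are assumptions; the patent's full bookkeeping also records the loan in the linked lists of both global pools, which this sketch only partially models.

```python
def expand_local_pool(l_pool, local_g, remote_g, need):
    """Grow an L_IO_Res pool by `need` resources: take from the same-domain
    G_IO_Res first; if it runs short, borrow the remainder from the other
    domain's G_IO_Res, keeping borrowed resources on the remote_* lists."""
    taken = local_g["free_list"][:need]
    del local_g["free_list"][:len(taken)]
    local_g["allocated_list"].extend(taken)        # local grant recorded as usual
    l_pool["free_list"].extend(taken)

    short = need - len(taken)                      # shortfall to borrow cross-domain
    borrowed = remote_g["free_list"][:short]
    del remote_g["free_list"][:len(borrowed)]
    remote_g["allocated_list"].extend(borrowed)    # lender records the loan
    local_g["remote_allocated_list"].extend(borrowed)  # borrower tracks what it owes
    l_pool["remote_free_list"].extend(borrowed)    # borrowed resources are now usable
    return len(taken) + len(borrowed)
```

Keeping borrowed resources on dedicated lists is what later allows them to be counted and returned when the pressure imbalance subsides.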
Using the technical solution of the invention, IO processing paths can be increased, IO processing capability can be improved, cross-path access of resources can be reduced, the overhead of accessing memory across sockets can be lowered, and IO resources can be kept in an optimal configuration.
In a preferred embodiment of the present invention, the steps of establishing a global IO resource pool and a plurality of local IO resource pools in each scheduling domain, and setting a resource linked list in the global IO resource pool and each local IO resource pool include:
creating a global IO resource pool in each scheduling domain, wherein the global IO resource pool is positioned on a physical memory directly connected with a CPU in an architecture to which the scheduling domain belongs;
respectively creating a plurality of local IO resource pools in each scheduling domain, and connecting each local IO resource pool to a corresponding global IO resource pool;
And setting a resource linked list in the global IO resource pool and in each local IO resource pool, respectively, for recording IO resource information. Global IO resource pools are built in the system; the memory resources within them are initially free and are divided into two global IO resource pools according to the physical location of the memory, the two pools belonging to different scheduling domains. As shown in FIG. 3, there are two G_IO_Res (Global IO Resource) pools, located in different scheduling domains. The memory in each G_IO_Res pool resides on the physical memory directly attached to the CPU of the domain to which the pool belongs, and is allocated and used on the IO processing path of that domain. The IO resources of each layer of service module are kept in its own L_IO_Res (Local IO Resource) pool. Each layer of service module has two L_IO_Res pools, allocated and used on the IO processing paths of the respective domains. The structures of G_IO_Res and L_IO_Res are essentially the same, with four main members: free_list, allocated_list, remote_free_list, and remote_allocated_list. free_list is the pool's list of free resources; allocated_list is the pool's list of allocated resources; remote_free_list is the list of remotely borrowed resources that are still free; and remote_allocated_list is the list of remotely borrowed resources that have been allocated.
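A minimal sketch of the pool structure described above: both G_IO_Res and L_IO_Res pools share the same four-list shape. The dict layout, helper name, and resource names are assumptions; only the list names follow the text.

```python
# Illustrative sketch of the shared G_IO_Res / L_IO_Res pool structure.

def make_pool(domain, n_free=0):
    return {"domain": domain,  # scheduling domain of the pool
            "free_list": [f"d{domain}_r{i}" for i in range(n_free)],  # free local resources
            "allocated_list": [],           # local resources handed out
            "remote_free_list": [],         # borrowed from the other domain, still free
            "remote_allocated_list": []}    # borrowed from the other domain, handed out

# Two global pools, one per scheduling domain, each backed by memory
# directly attached to that domain's CPU socket.
g_io_res = {d: make_pool(d, n_free=4) for d in (0, 1)}
```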
In a preferred embodiment of the present invention, in response to system initialization, the steps of allocating IO resources in the global IO resource pool to corresponding local IO resource pools and recording allocation conditions in respective resource linked lists include:
responding to system initialization, and applying for a corresponding number of IO resources from the global IO resource pool according to the static quota;
distributing the applied-for IO resources to the corresponding local IO resource pools;
and respectively recording the information of the allocated IO resources in the resource linked lists of the global IO resource pool and the local IO resource pools. During system initialization, each layer of service module applies for a corresponding number of IO resources from the G_IO_Res pool according to its static quota and places them into its own L_IO_Res pool. Specifically, a number of IO resources are transferred from the free_list of the G_IO_Res pool to the free_list of the service module's L_IO_Res pool, and these resources are recorded in the allocated_list of the G_IO_Res pool. Because each layer of service module has two L_IO_Res pools belonging to different domains, when an L_IO_Res pool applies for IO resources from a G_IO_Res pool, the G_IO_Res pool in the same domain as the L_IO_Res pool is always selected.
In a preferred embodiment of the present invention, further comprising:
Responding to failure of the local IO resource pool in applying for IO resources from the global IO resource pool, applying for IO resources from the global IO resource pools of other scheduling domains by the global IO resource pool, performing IO resource scheduling, and recording the scheduling conditions in the respective resource linked lists.
In a preferred embodiment of the present invention, the step of, in response to failure of the local IO resource pool in applying for IO resources from the global IO resource pool, applying for IO resources from the global IO resource pools of other scheduling domains by the global IO resource pool, performing IO resource scheduling, and recording the scheduling conditions in the respective resource linked lists includes:
responding to failure of the local IO resource pool in applying for IO resources from the global IO resource pool, and applying for IO resources from the global IO resource pools of other scheduling domains by the global IO resource pool;
transferring, by the global IO resource pools of the other scheduling domains, the corresponding IO resources to the global IO resource pool from which the application was sent, and recording the scheduling conditions in the resource linked lists of the two global IO resource pools respectively;
allocating, by the global IO resource pool from which the application was sent, the obtained IO resources to the local IO resource pool from which the application was sent, and recording the allocation conditions in the resource linked lists of the global IO resource pool and the local IO resource pool. A certain amount of IO resources is obtained from the G_IO_Res pool of the other domain to satisfy this domain's resource request, provided that the free_list of the G_IO_Res pool in the other domain has enough IO resources. Specifically, a certain amount of IO resources is transferred from the free_list of the G_IO_Res pool in the other domain to the remote_free_list of the G_IO_Res pool in this domain, and these IO resources are recorded in the allocated_list of the G_IO_Res pool in the other domain. A certain amount of IO resources is then transferred from the remote_free_list of the G_IO_Res pool in this domain to the remote_free_list of the requesting module's L_IO_Res pool in this domain, and these resources are recorded in the remote_allocated_list of the G_IO_Res pool in this domain.
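The two-step borrowing flow above can be sketched as follows, again under the assumed dict-based pool layout: step 1 moves resources from the lender's free_list into the borrower's remote_free_list, step 2 passes them on to the requesting module's L_IO_Res pool. All function and variable names are assumptions.

```python
# Illustrative sketch of cross-domain borrowing when the local G_IO_Res
# pool has run out of free resources.

def make_pool(domain, n_free=0):
    return {"domain": domain,
            "free_list": [f"d{domain}_r{i}" for i in range(n_free)],
            "allocated_list": [],
            "remote_free_list": [],
            "remote_allocated_list": []}

def borrow_from_other_domain(g_this, g_other, l_pool, n):
    if len(g_other["free_list"]) < n:
        return []                            # lender is also short: borrowing fails
    # Step 1: lender's free_list -> borrower's remote_free_list,
    # with the loan recorded in the lender's allocated_list.
    lent = [g_other["free_list"].pop() for _ in range(n)]
    g_other["allocated_list"].extend(lent)
    g_this["remote_free_list"].extend(lent)
    # Step 2: hand the borrowed resources to the requesting L_IO_Res pool,
    # recording the grant in this domain's remote_allocated_list.
    for r in lent:
        g_this["remote_free_list"].remove(r)
        l_pool["remote_free_list"].append(r)
        g_this["remote_allocated_list"].append(r)
    return lent

g0, g1, l0 = make_pool(0), make_pool(1, n_free=5), make_pool(0)
borrow_from_other_domain(g0, g1, l0, 2)
```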
In a preferred embodiment of the present invention, further comprising:
responding to the IO pressure of the scheduling domain remaining below a preset value for a first preset time, and releasing the IO resources in each local IO resource pool into the global IO resource pool;
and recording the release conditions in the resource linked lists of the local IO resource pools and the global IO resource pool. When the IO pressure of a scheduling domain falls and remains below the preset value for a continuous period of time, the service modules gradually release the expanded IO resources, i.e., the IO resources previously added to their L_IO_Res pools in this domain, returning them to the free_list or remote_free_list of the G_IO_Res pool in this domain.
In a preferred embodiment of the present invention, further comprising:
responding to the IO resources released by the local IO resource pool being IO resources of other scheduling domains, and releasing the corresponding IO resources into the global IO resource pools of the other scheduling domains by the global IO resource pool;
and recording the release conditions in the resource linked lists of the global IO resource pool and the global IO resource pools of the other scheduling domains. If IO resources on the free_list of the L_IO_Res pool are released, they are transferred to the free_list of the G_IO_Res pool; if IO resources on the remote_free_list of the L_IO_Res pool are released, they are transferred to the remote_free_list of the G_IO_Res pool. The allocated_list or remote_allocated_list of the G_IO_Res pool is updated accordingly.
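The release routing above can be sketched as follows: resources freed from an L_IO_Res pool return to the same-domain G_IO_Res free_list if they were local, or to its remote_free_list if they were borrowed, with the matching allocated lists trimmed. The dict layout and names are assumptions.

```python
# Illustrative sketch of releasing an L_IO_Res pool's resources back to
# the same-domain G_IO_Res pool, routed by origin (local vs. borrowed).

def make_pool(domain):
    return {"domain": domain, "free_list": [], "allocated_list": [],
            "remote_free_list": [], "remote_allocated_list": []}

def release_local_pool(g_pool, l_pool):
    for r in l_pool["free_list"]:            # local resources go home
        g_pool["allocated_list"].remove(r)
        g_pool["free_list"].append(r)
    for r in l_pool["remote_free_list"]:     # borrowed resources stay "remote"
        g_pool["remote_allocated_list"].remove(r)
        g_pool["remote_free_list"].append(r)
    l_pool["free_list"].clear()
    l_pool["remote_free_list"].clear()

g, l = make_pool(0), make_pool(0)
g["allocated_list"], l["free_list"] = ["a", "b"], ["a", "b"]
g["remote_allocated_list"], l["remote_free_list"] = ["x"], ["x"]
release_local_pool(g, l)
```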
In a preferred embodiment of the present invention, further comprising:
counting, in a resource linked list, the number of unallocated IO resources among the IO resources that the global IO resource pool has applied for from the global IO resource pools of other scheduling domains;
comparing the counted number with a set threshold;
releasing the unallocated IO resources to the global IO resource pools of the other scheduling domains within a second preset time in response to the counted number exceeding the set threshold;
and recording the release conditions in the resource linked lists of the global IO resource pool and the global IO resource pools of the other scheduling domains.
In a preferred embodiment of the present invention, further comprising:
counting, in a resource linked list, the number of allocated IO resources among the IO resources that the global IO resource pool has applied for from the global IO resource pools of other scheduling domains;
releasing the unallocated IO resources to the global IO resource pools of the other scheduling domains within a fourth preset time in response to the counted number remaining below a preset value for a third preset time;
and recording the release conditions in the resource linked lists of the global IO resource pool and the global IO resource pools of the other scheduling domains. If the number of IO resources on the remote_free_list of the G_IO_Res pool exceeds a certain threshold, they are returned in due time to the G_IO_Res pool of the other domain. If the remote_allocated_list of the G_IO_Res pool remains at a low level for more than a certain time, all IO resources on the remote_free_list of the G_IO_Res pool are returned to the G_IO_Res pool of the other domain. Returning resources to the G_IO_Res pool of the other domain specifically means transferring the IO resources on the remote_free_list of the G_IO_Res pool in this domain back to the free_list of the G_IO_Res pool in the other domain, updating the allocated_list of the G_IO_Res pool in the other domain, and removing the record of these resources from that allocated_list. If a service module cannot borrow enough resources from the other domain, it can only wait asynchronously for other modules to release resources, and apply to the G_IO_Res pool in this domain once the conditions are met.
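Returning borrowed resources to the lending domain, as in the threshold and idle-timeout cases above, can be sketched as follows: everything on this domain's remote_free_list goes back to the other domain's free_list, and the lender's loan record is erased. Names and the dict layout are assumptions.

```python
# Illustrative sketch of returning borrowed IO resources to the lender.

def make_pool(domain):
    return {"domain": domain, "free_list": [], "allocated_list": [],
            "remote_free_list": [], "remote_allocated_list": []}

def return_borrowed(g_this, g_other):
    for r in list(g_this["remote_free_list"]):
        g_other["allocated_list"].remove(r)  # erase the loan record
        g_other["free_list"].append(r)       # resource is free again at home
    g_this["remote_free_list"].clear()

g0, g1 = make_pool(0), make_pool(1)
g0["remote_free_list"] = ["d1_r0", "d1_r1"]  # previously borrowed from domain 1
g1["allocated_list"] = ["d1_r0", "d1_r1"]
return_borrowed(g0, g1)
```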
In a preferred embodiment of the present invention, further comprising:
responding to the IO pressure of a local IO resource pool remaining below a pressure threshold for a fifth preset time, and reducing the static quota of the local IO resource pool;
releasing, by the local IO resource pool, the IO resources corresponding to the reduced static quota to the global IO resource pool, and recording the release conditions in the resource linked lists of the local IO resource pool and the global IO resource pool. In some scenarios the IO path pressures on the two domains are unbalanced, and each layer of service module can dynamically adjust the static quotas of its L_IO_Res pools in the two domains. For example, the static quota of the L_IO_Res pool in the domain with low IO pressure is reduced, releasing part of its IO resources to the G_IO_Res pool in that domain, so that when the side under high IO pressure needs to borrow more memory, enough memory can be borrowed quickly.
In a preferred embodiment of the present invention, the resource linked list in each IO resource pool includes a linked list of free IO resources of the resource pool, a linked list of allocated IO resources of the resource pool, a linked list of IO resources in a free state among the IO resources borrowed from the remote end, and a linked list of IO resources in an allocated state among the IO resources borrowed from the remote end.
In a preferred embodiment of the present invention, further comprising:
counting the current IO pressure of each scheduling domain;
and in response to receiving a new IO task, issuing the IO task to the scheduling domain with the lowest current IO pressure for processing.
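The pressure-based dispatch above can be sketched in a few lines: a new IO task goes to the scheduling domain whose current IO pressure is lowest. The pressure bookkeeping (a plain dict keyed by domain id) is an assumption.

```python
# Illustrative sketch of dispatching a new IO task to the least-loaded
# scheduling domain.

def pick_domain(domain_pressure):
    # domain_pressure maps domain id -> current IO pressure
    return min(domain_pressure, key=domain_pressure.get)

pressure = {0: 120, 1: 45}
target = pick_domain(pressure)  # new tasks go to domain 1
```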
According to the technical scheme of the present invention, the scheduling domains and resource pools are divided according to the two-socket architecture: each path corresponds to one scheduling domain and to the IO resources local to that scheduling domain, so that each path has independent task scheduling and resource management, i.e., a relatively independent IO path. This increases the number of IO processing paths, improves IO processing capacity, reduces cross-path access to resources, and reduces overall the overhead caused by cross-socket memory access. Meanwhile, in order to adapt to scenarios in which the pressure on the two paths is unbalanced, the present invention also provides an adaptive adjustment strategy and method for IO resources under two-path pressure imbalance, so as to satisfy as far as possible the different demands on both sides and keep the IO resources in an optimal configuration.
It should be noted that, as will be understood by those skilled in the art, all or part of the procedures in the methods of the above embodiments may be implemented by a computer program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the procedures of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like. The computer program embodiments described above may achieve the same or similar effects as any of the method embodiments described above.
Furthermore, the method disclosed according to the embodiment of the present invention may also be implemented as a computer program executed by a CPU, which may be stored in a computer-readable storage medium. When executed by a CPU, performs the functions defined above in the methods disclosed in the embodiments of the present invention.
With the above object in mind, in a second aspect of the embodiments of the present invention, there is provided an apparatus for storage system resource management. As shown in fig. 4, the apparatus 200 includes:
a creation module configured to create a separate scheduling domain for the CPU and memory resources of each path of the storage system;
the setting module is configured to respectively establish a global IO resource pool and a plurality of local IO resource pools in each scheduling domain, and respectively set a resource linked list in the global IO resource pool and each local IO resource pool;
the allocation module is configured to respond to system initialization, allocate IO resources in the global IO resource pool to corresponding local IO resource pools, and record allocation conditions in respective resource linked lists;
the application module is configured to, in response to a local IO resource pool needing additional IO resources, cause the local IO resource pool to apply for IO resources from the global IO resource pool;
the scheduling module is configured to, in response to the local IO resource pool successfully applying for IO resources from the global IO resource pool, cause the global IO resource pool to allocate the corresponding IO resources to the local IO resource pool and record the allocation conditions in the resource linked lists of the global IO resource pool and the local IO resource pool.
Based on the above object, a third aspect of the embodiments of the present invention proposes a computer device. Fig. 5 is a schematic diagram of an embodiment of a computer device provided by the present invention. As shown in fig. 5, the embodiment of the present invention includes the following means: at least one processor 21; and a memory 22 storing computer instructions 23 executable on the processor, the instructions, when executed by the processor, implementing the above method.
Based on the above object, a fourth aspect of the embodiments of the present invention proposes a computer-readable storage medium. FIG. 6 is a schematic diagram illustrating one embodiment of a computer-readable storage medium provided by the present invention. As shown in fig. 6, the computer-readable storage medium 31 stores a computer program 32 that, when executed by a processor, performs the above method.
Furthermore, the method disclosed according to the embodiment of the present invention may also be implemented as a computer program executed by a processor, which may be stored in a computer-readable storage medium. The above-described functions defined in the methods disclosed in the embodiments of the present invention are performed when the computer program is executed by a processor.
Furthermore, the above-described method steps and system units may also be implemented using a controller and a computer-readable storage medium storing a computer program for causing the controller to implement the above-described steps or unit functions.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In one or more exemplary designs, the functions may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one location to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer or processor. Further, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The foregoing serial numbers of the embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, and the program may be stored in a computer readable storage medium, where the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will appreciate that: the above discussion of any embodiment is merely exemplary and is not intended to imply that the scope of the disclosure of embodiments of the invention, including the claims, is limited to such examples; combinations of features of the above embodiments or in different embodiments are also possible within the idea of an embodiment of the invention, and many other variations of the different aspects of the embodiments of the invention as described above exist, which are not provided in detail for the sake of brevity. Therefore, any omission, modification, equivalent replacement, improvement, etc. of the embodiments should be included in the protection scope of the embodiments of the present invention.

Claims (13)

1. A method of storage system resource management, comprising the steps of:
creating a separate scheduling domain for each path of CPU and memory resources of the storage system;
respectively establishing a global IO resource pool and a plurality of local IO resource pools in each scheduling domain, and respectively setting a resource linked list in the global IO resource pool and each local IO resource pool;
the step of respectively establishing a global IO resource pool and a plurality of local IO resource pools in each scheduling domain and setting a resource linked list in the global IO resource pool and each local IO resource pool comprises the following steps:
Creating a global IO resource pool in each scheduling domain, wherein the global IO resource pool is positioned on a physical memory directly connected with a CPU in an architecture to which the scheduling domain belongs;
respectively creating a plurality of local IO resource pools in each scheduling domain, and connecting each local IO resource pool to the corresponding global IO resource pool;
setting a resource linked list in the global IO resource pool and each local IO resource pool respectively for recording IO resource information;
responding to system initialization, distributing IO resources in the global IO resource pool to the corresponding local IO resource pools, and recording distribution conditions in the respective resource linked lists;
in response to a local IO resource pool needing to add IO resources, the local IO resource pool applies IO resources to the global IO resource pool;
responding to the local IO resource pool to apply IO resources to the global IO resource pool successfully, distributing corresponding IO resources to the local IO resource pool by the global IO resource pool, and recording distribution conditions in resource chain tables of the global IO resource pool and the local IO resource pool;
responding to failure of the local IO resource pool in applying for IO resources from the global IO resource pool, applying for IO resources from the global IO resource pools of other scheduling domains by the global IO resource pool, performing IO resource scheduling, and recording the scheduling conditions in the respective resource linked lists.
2. The method of claim 1, wherein the steps of allocating IO resources in the global IO resource pool to corresponding local IO resource pools and recording allocation in the respective resource linked lists in response to system initialization include:
responding to system initialization, and applying for IO resources with corresponding quantity from the global IO resource pool according to static quota;
distributing the applied IO resources to the corresponding local IO resource pools;
and respectively recording the information of the allocated IO resources in the resource linked lists in the global IO resource pool and the local IO resource pool.
3. The method according to claim 1, wherein the step of, in response to the failure of the local IO resource pool to apply for IO resources to the global IO resource pool, the global IO resource pool applies for IO resources to global IO resource pools of other scheduling domains and performs IO resource scheduling, and records scheduling conditions in the respective resource linked lists includes:
responding to failure of applying IO resources from the local IO resource pool to the global IO resource pool, and applying IO resources from the global IO resource pool to global IO resource pools of other scheduling domains by the global IO resource pool;
The global IO resource pools of the other scheduling domains transfer corresponding IO resources to the global IO resource pools from which the application is sent, and the scheduling conditions are recorded in resource linked lists of the two global IO resource pools respectively;
the global IO resource pool from which the application is sent distributes the obtained IO resources to the local IO resource pool from which the application is sent, and the distribution condition is recorded in the global IO resource pool and the resource linked list in the local IO resource pool.
4. The method as recited in claim 1, further comprising:
responding to the IO pressure of the scheduling domain to last for a first preset time to be lower than a preset value, and releasing IO resources in each local IO resource pool into the global IO resource pool;
and recording release conditions in the resource linked lists in the local IO resource pool and the global IO resource pool.
5. The method as recited in claim 4, further comprising:
responding to the IO resources released by the local IO resource pool as IO resources of other scheduling domains, and releasing the corresponding IO resources into the global IO resource pool of the other scheduling domains by the global IO resource pool;
and recording release conditions in resource linked lists in the global IO resource pool and the global IO resource pools of other scheduling domains.
6. The method as recited in claim 1, further comprising:
counting, in a resource linked list, the number of unallocated IO resources among the IO resources that the global IO resource pool has applied for from the global IO resource pools of other scheduling domains;
comparing the counted number with a set threshold;
releasing the unassigned IO resources to a global IO resource pool of the other scheduling domains within a second preset time in response to the counted number exceeding the set threshold;
and recording release conditions in resource linked lists of the global IO resource pool and the global IO resource pools of the other scheduling domains.
7. The method as recited in claim 1, further comprising:
counting, in a resource linked list, the number of allocated IO resources among the IO resources that the global IO resource pool has applied for from the global IO resource pools of other scheduling domains;
releasing unallocated IO resources to the global IO resource pools of the other scheduling domains within a fourth preset time in response to the counted number remaining below a preset value for a third preset time;
and recording release conditions in resource linked lists of the global IO resource pool and the global IO resource pools of the other scheduling domains.
8. A method according to claim 3, further comprising:
responding to the IO pressure of a local IO resource pool remaining below a pressure threshold for a fifth preset time, and reducing the static quota of the local IO resource pool;
releasing IO resources corresponding to the reduced static quota to the global IO resource pool by the local IO resource pool, and recording release conditions in resource linked lists of the local IO resource pool and the global IO resource pool.
9. The method of claim 1, wherein the resource linked list in each of the IO resource pools comprises a linked list of free IO resources of the resource pool, a linked list of allocated IO resources of the resource pool, a linked list of IO resources in a free state among the IO resources borrowed from the remote end, and a linked list of IO resources in an allocated state among the IO resources borrowed from the remote end.
10. The method as recited in claim 1, further comprising:
counting the current IO pressure of each scheduling domain;
and in response to receiving a new IO task, issuing the IO task to a scheduling domain with the minimum current IO pressure for processing.
11. An apparatus for storage system resource management, the apparatus comprising:
A creation module configured to create separate scheduling domains for the CPU and memory resources of each way of the storage system;
the setting module is configured to respectively establish a global IO resource pool and a plurality of local IO resource pools in each scheduling domain, and respectively set a resource linked list in the global IO resource pool and each local IO resource pool;
the step of respectively establishing a global IO resource pool and a plurality of local IO resource pools in each scheduling domain and setting a resource linked list in the global IO resource pool and each local IO resource pool comprises the following steps:
creating a global IO resource pool in each scheduling domain, wherein the global IO resource pool is positioned on a physical memory directly connected with a CPU in an architecture to which the scheduling domain belongs;
respectively creating a plurality of local IO resource pools in each scheduling domain, and connecting each local IO resource pool to the corresponding global IO resource pool;
setting a resource linked list in the global IO resource pool and each local IO resource pool respectively for recording IO resource information;
the allocation module is configured to respond to system initialization, allocate IO resources in the global IO resource pool to the corresponding local IO resource pool, and record allocation conditions in the respective resource linked list;
The application module is configured to respond to the need of adding IO resources of a local IO resource pool, and the local IO resource pool applies IO resources to the global IO resource pool;
the scheduling module is configured to, in response to the local IO resource pool successfully applying for IO resources from the global IO resource pool, cause the global IO resource pool to allocate the corresponding IO resources to the local IO resource pool and record the allocation conditions in the resource linked lists of the global IO resource pool and the local IO resource pool; the scheduling module is further configured to, in response to failure of the local IO resource pool in applying for IO resources from the global IO resource pool, cause the global IO resource pool to apply for IO resources from the global IO resource pools of other scheduling domains, perform IO resource scheduling, and record the scheduling conditions in the respective resource linked lists.
12. A computer device, comprising:
at least one processor; and
a memory storing computer instructions executable on the processor, which when executed by the processor, perform the steps of the method of any one of claims 1-10.
13. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method of any one of claims 1-10.
CN202311841280.4A 2023-12-28 2023-12-28 Method, device, equipment and medium for managing storage system resources Active CN117492967B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311841280.4A CN117492967B (en) 2023-12-28 2023-12-28 Method, device, equipment and medium for managing storage system resources

Publications (2)

Publication Number Publication Date
CN117492967A CN117492967A (en) 2024-02-02
CN117492967B true CN117492967B (en) 2024-03-19

Family

ID=89680374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311841280.4A Active CN117492967B (en) 2023-12-28 2023-12-28 Method, device, equipment and medium for managing storage system resources

Country Status (1)

Country Link
CN (1) CN117492967B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118069071B (en) * 2024-04-19 2024-08-13 苏州元脑智能科技有限公司 Resource access control method, device, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101706757A (en) * 2009-09-21 2010-05-12 中国科学院计算技术研究所 I/O system and working method facing multi-core platform and distributed virtualization environment
CN102497432A (en) * 2011-12-13 2012-06-13 华为技术有限公司 Multi-path accessing method for input/output (I/O) equipment, I/O multi-path manager and system
CN106681835A (en) * 2016-12-28 2017-05-17 华为技术有限公司 Resource allocation method and resource manager
CN111708631A (en) * 2020-05-06 2020-09-25 深圳震有科技股份有限公司 Data processing method based on multi-path server, intelligent terminal and storage medium
CN116340003A (en) * 2023-04-11 2023-06-27 国网信息通信产业集团有限公司北京分公司 Self-adaptive edge computing resource management method and system based on deep reinforcement learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220382591A1 (en) * 2021-05-27 2022-12-01 Vmware, Inc. Managing resource distribution in global and local pools based on a flush threshold

Also Published As

Publication number Publication date
CN117492967A (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN103608792B (en) The method and system of resource isolation under support multicore architecture
CN117492967B (en) Method, device, equipment and medium for managing storage system resources
US8074041B2 (en) Apparatus, system, and method for managing storage space allocation
US8145873B2 (en) Data management method for network storage system and the network storage system built thereof
US10158579B2 (en) Resource silos at network-accessible services
US20100023564A1 (en) Synchronous replication for fault tolerance
CN111538586A (en) Cluster GPU resource management scheduling system, method and computer readable storage medium
CN110537169A (en) Cluster resource management in distributed computing system
CN104168323B (en) A kind of cloud service system and method
US8826367B2 (en) Elastic resource provisioning in an asymmetric cluster environment
CN104219279A (en) Modular architecture for extreme-scale distributed processing applications
CN111984191A (en) Multi-client caching method and system supporting distributed storage
CN105843559A (en) Read-write optimization method and system of disk cache system
WO2024060788A1 (en) Intelligent-computing-oriented adaptive adjustment system and method for pipeline-parallel training
CN112099728B (en) Method and device for executing write operation and read operation
CN106533961A (en) Flow control method and device
CN106326143A (en) Cache distribution, data access and data sending method, processor and system
JP2019095881A (en) Storage controller and program
CN116974489A (en) Data processing method, device and system, electronic equipment and storage medium
US10454846B2 (en) Managing multiple cartridges that are electrically coupled together
CN114785662B (en) Storage management method, device, equipment and machine-readable storage medium
CN110413197A (en) Manage method, equipment and the computer program product of storage system
CN105094761A (en) Data storage method and device
CN108153489B (en) Virtual data cache management system and method of NAND flash memory controller
CN116737810B (en) Consensus service interface for distributed time sequence database

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant