CN114816276A - Method for providing disk speed limit based on logical volume management under Kubernetes - Google Patents

Method for providing disk speed limit based on logical volume management under Kubernetes Download PDF

Info

Publication number
CN114816276A
Authority
CN
China
Prior art keywords
value
bps
iops
container
kernel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210747360.2A
Other languages
Chinese (zh)
Other versions
CN114816276B (en)
Inventor
花磊
张凯
崔骥
付少松
赵安全
王亮
张振华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Boyun Technology Co ltd
Original Assignee
Jiangsu Boyun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Boyun Technology Co ltd filed Critical Jiangsu Boyun Technology Co ltd
Priority to CN202210747360.2A priority Critical patent/CN114816276B/en
Publication of CN114816276A publication Critical patent/CN114816276A/en
Application granted granted Critical
Publication of CN114816276B publication Critical patent/CN114816276B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0662Virtualisation aspects
    • G06F3/0665Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • G06F3/0674Disk device
    • G06F3/0676Magnetic disk device

Abstract

The application relates to a method for providing disk speed limiting based on logical volume management under Kubernetes, which belongs to the technical field of cloud computing and comprises the following steps: after a volume is successfully mounted for a container in the Kubernetes cluster, if the device state is detected to be running, determining whether an iops value and a bps value need to be set; detecting the kernel version supported by the current node in the case that an iops value and a bps value need to be set; in the case that the current node supports the cgroup v1 version, setting the iops value and the bps value in the kernel cgroup blkio subsystem; in the case that the current node supports the cgroup v2 version, setting the iops value and the bps value in the kernel io subsystem. Speed limiting of iops and bps for disks dynamically provisioned in a Kubernetes environment can thereby be realized.

Description

Method for providing disk speed limit based on logical volume management under Kubernetes
Technical Field
The application relates to a method for providing disk speed limit based on logical volume management under Kubernetes, and belongs to the technical field of cloud computing.
Background
cgroups (control groups) is a process-grouping mechanism provided by the Linux kernel for limiting, accounting for, and isolating the resources (such as CPU, memory, and disk input/output) used by a group of processes. There are two major versions of cgroup: cgroup v1 and cgroup v2.
The resources that cgroup primarily limits are CPU, memory, network, and disk I/O. When a certain percentage of the available system resources is allocated to a cgroup, the remaining resources are available to other cgroups or other processes on the system. Container technology relies on the cgroup mechanism provided by the Linux kernel to achieve resource-level limitation and isolation. Disk I/O is mainly divided into buffered I/O and direct I/O; the I/O limits applied to a disk mainly constrain the amount of data read and written per second (bps) and the number of I/O operations per second (iops).
In the cgroup v1 kernel interface this is controlled by the blkio subsystem, while in cgroup v2 it is controlled by the io controller.
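As a concrete illustration of the two kernel interface generations, the sketch below (a minimal example, assuming a standard cgroup mount under `/sys/fs/cgroup`) detects which cgroup version a node exposes and lists the throttle interface files each version uses; the file names are the standard kernel ones.

```python
import os

def cgroup_version(root="/sys/fs/cgroup"):
    # cgroup v2 exposes a unified hierarchy whose mount root contains a
    # cgroup.controllers file; cgroup v1 mounts one hierarchy per subsystem.
    return 2 if os.path.exists(os.path.join(root, "cgroup.controllers")) else 1

# cgroup v1: four separate blkio throttle files, one "MAJ:MIN value" line per device.
V1_THROTTLE_FILES = (
    "blkio.throttle.read_bps_device",
    "blkio.throttle.write_bps_device",
    "blkio.throttle.read_iops_device",
    "blkio.throttle.write_iops_device",
)

# cgroup v2: a single io.max file taking
# "MAJ:MIN rbps=... wbps=... riops=... wiops=..." lines.
V2_IO_FILE = "io.max"
```

A node module only needs this one check to decide which set of files to write.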
The most significant difference between cgroup v2 and cgroup v1 is that cgroup v1 allows any number of hierarchies, which causes the following problems, specifically:
When mounting a cgroup hierarchy, a comma-separated list of subsystems to be mounted may be specified as a file system mount option. By default, mounting the cgroup file system attempts to mount a hierarchy containing all registered subsystems.
If an active hierarchy with an identical set of subsystems already exists, it will be reused for the new mount.
If the existing hierarchy does not match, and any requested subsystem is in use in an existing hierarchy, the mount fails with -EBUSY. Otherwise, a new hierarchy associated with the requested subsystems is activated.
It is currently not possible to bind a new subsystem to, or unbind a subsystem from, an active cgroup hierarchy. When a cgroup file system is unmounted, the hierarchy remains active if any child cgroups were created below the top-level cgroup; if there are no child cgroups, the hierarchy is deactivated. This is a problem with the cgroup v1 version.
Cgroup v2 solves the above problems, but to date most applications still use the cgroup v1 interface. Owing to this design shortcoming, cgroup v1 cannot limit buffered I/O, whereas cgroup v2 can. In a Kubernetes environment, limiting disk iops and bps therefore requires supporting both versions of the cgroup interface, and existing containerized disk deployments cannot meet this requirement directly.
Disclosure of Invention
The application provides a method for providing disk speed limiting based on logical volume management under Kubernetes, which can apply iops and bps speed limits to disks dynamically provisioned in a Kubernetes environment, and supports speed limiting of both block devices and file systems. The application provides the following technical scheme:
A method for providing disk speed limiting based on logical volume management under Kubernetes comprises the following steps:
after a volume is successfully mounted for a container in the Kubernetes cluster, if the device state is detected to be running, determining whether an iops value and a bps value need to be set;
detecting the kernel version supported by the current node in the case that an iops value and a bps value need to be set;
in the case that the current node supports the cgroup v1 version, setting the iops value and the bps value in the kernel cgroup blkio subsystem;
and in the case that the current node supports the cgroup v2 version, setting the iops value and the bps value in the kernel io subsystem.
Optionally, the determining whether the iops value and the bps value need to be set includes:
determining whether the container is using logical volume management;
in the case where the container uses logical volume management, it is determined that an iops value and a bps value need to be set.
Optionally, the method further comprises:
after the container is created, if the bound PVC is found to be consistent with the csi-driver of the container, checking whether the annotation of the container exists;
triggering execution of the step of determining whether the container is using logical volume management if an annotation of the container exists.
Optionally, setting the iops value and the bps value of the kernel cgroup blkio subsystem, or setting the iops value and the bps value of the kernel io subsystem, comprises:
querying the device number of the logical volume used by the container;
determining whether the value set in the container's annotation is consistent with the iops value and the bps value set in the kernel cgroup subsystem of the current node;
and if the values are inconsistent, writing the user-configured values into the kernel cgroup configuration parameters.
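The compare-then-write step above can be sketched as follows. This is a minimal illustration, not the patented implementation; it assumes a v1-style throttle file holding one "MAJ:MIN value" line per device.

```python
def sync_limit(throttle_file, devno, desired):
    """Write `desired` for device `devno` only if the currently
    configured value differs (or no value is set yet).

    throttle_file: path such as .../blkio.throttle.read_iops_device
    devno: "MAJ:MIN" string of the logical volume's device-mapper device
    desired: integer limit taken from the container annotation
    """
    current = None
    try:
        with open(throttle_file) as f:
            for line in f:
                dev, _, val = line.strip().partition(" ")
                if dev == devno:
                    current = int(val)
    except FileNotFoundError:
        pass
    if current != desired:
        # cgroup files accept single-line writes of the form "MAJ:MIN value"
        with open(throttle_file, "w") as f:
            f.write(f"{devno} {desired}\n")
        return True   # value was updated
    return False      # already consistent, nothing written
```

Skipping the write when the values already match keeps the reconciliation loop idempotent.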
Optionally, the method further comprises:
creating a PVC in a Kubernetes cluster to apply for storage space, and creating a Deployment, wherein the Deployment comprises the user-configured value, and the user-configured value comprises annotations limiting iops and bps;
monitoring a newly created Deployment and the size of the storage space applied for by the Deployment, and scheduling and allocating nodes accordingly;
binding the container with the current node, adding annotation and node selection tags for the PVC;
the PV is created after the volume creation is successful.
Optionally, the cgroup v1 version supports block device speed limiting, and the cgroup v2 version supports both block device and file system speed limiting.
The beneficial effects of this application include at least the following: after a volume is successfully mounted for a container in the Kubernetes cluster, if the device state is detected to be running, it is determined whether an iops value and a bps value need to be set; the kernel version supported by the current node is detected in the case that an iops value and a bps value need to be set; in the case that the current node supports the cgroup v1 version, the iops value and the bps value are set in the kernel cgroup blkio subsystem; in the case that the current node supports the cgroup v2 version, the iops value and the bps value are set in the kernel io subsystem. This solves the problem that speed limits cannot be applied to dynamically provisioned disks in a Kubernetes environment, realizes iops and bps speed limiting for disks dynamically provisioned in a Kubernetes environment, and supports speed limiting of both block devices and file systems.
The foregoing is only an overview of the technical solutions of the present application. To make the technical means of the present application clearer and implementable in accordance with the content of the description, a detailed description is given below with reference to preferred embodiments of the present application and the accompanying drawings.
Drawings
FIG. 1 is a flow chart of a method for providing disk speed limit based on logical volume management under Kubernetes according to an embodiment of the present application;
FIG. 2 is a flow chart of a method for providing disk speed limit based on logical volume management under Kubernetes according to another embodiment of the present application;
FIG. 3 is a flow chart of a method for providing disk speed limit based on logical volume management under Kubernetes according to another embodiment of the present application;
fig. 4 is a flowchart of a method for providing disk speed limit based on logical volume management under Kubernetes according to yet another embodiment of the present application.
Detailed Description
The following detailed description of embodiments of the present application will be made with reference to the accompanying drawings and examples. The following examples are intended to illustrate the present application but are not intended to limit the scope of the present application.
First, a number of terms referred to in the present application will be described.
Container scheduling service (Kubernetes): commonly abbreviated K8s, where the 8 replaces the eight letters "ubernete" in the middle of the name. Kubernetes is an open-source container orchestration engine that supports automated deployment, large-scale scaling, and containerized application management. When an application is deployed in a production environment, multiple instances of the application are typically deployed to load-balance application requests. In Kubernetes, multiple containers can be created, with one application instance running in each container; management, discovery, and access to this group of application instances are then realized through a built-in load-balancing policy.
Kubernetes is responsible for managing and scheduling container services and is currently the de facto standard for container scheduling and management in the cloud computing field. The Scheduler is a component of Kubernetes mainly responsible for scheduling Pods.
Kubelet: a core component of Kubernetes and the agent component on Kubernetes worker nodes, running on every node. The Kubelet periodically receives new or modified Pod specifications from the kube-apiserver component and ensures that Pods and their containers run according to the desired specifications. It also serves as the worker node's monitoring component, reporting the host's running condition to the kube-apiserver. In other words, the Kubelet is responsible for the running state of each node (i.e., ensuring that all containers on the node run properly), handling the starting, stopping, and maintenance of application containers (Pods) as instructed by the control plane.
The kube-apiserver component provides HTTP REST interfaces for creating, deleting, querying, and watching Kubernetes resource objects (container set (Pod), replication controller (RC), Service, etc.), and is the data bus and data center of the whole system.
Container set (Pod): the smallest unit managed by Kubernetes; a group of containers combined together is called a Pod.
Storage Volume (Volume): containers are stateless on the server and need a mounted storage volume to save data to a local disk.
ConfigMap: an API object used to store non-confidential data in key-value pairs.
DaemonSet: an API object that ensures a copy of a Pod runs on all (or some) nodes.
Carina: a project that provides storage volumes for Pods in a Kubernetes cluster.
Logical Volume Manager (LVM): a logical layer established on top of hard disks and partitions to improve the flexibility of disk partition management.
Disk: the storage medium of a computer.
Raw: the type definition for storage that directly uses physical disk partitions, as distinguished from the LVM approach.
Partitioning: logically dividing one disk into several areas, each used as an independent hard disk for convenient management.
Node: a physical host making up part of a Kubernetes cluster.
LogicVolume: a resource object customized in the Kubernetes environment, used to record the requested storage resource information.
NodeStorageResource: a resource object customized in the Kubernetes environment, used to record node disk information.
Container Storage Interface (CSI): a plug-in mechanism in the Kubernetes environment, used to implement custom storage plug-ins.
Controller module: the control service of the application program.
Node module: the service of the application program running on each node.
Deployment: a Pod controller (one of several controller types) for stateless services (e.g., deployable web microservices), providing functions such as online deployment, rolling upgrade, replica creation, and rollback to an earlier (successful/stable) version.
PersistentVolume (PV): storage volume information created in Kubernetes; an abstraction of underlying network shared storage that defines shared storage as a resource. A PV is a piece of network storage in the cluster configured by an administrator. It is a resource in the cluster, just as a node is a cluster resource. PVs are volume plug-ins, like Volumes, but have a lifecycle independent of any individual Pod that uses them. The API object captures the details of the storage implementation, such as Network File System (NFS), Internet Small Computer System Interface (iSCSI), or a cloud-provider-specific storage system.
PersistentVolumeClaim (PVC): creating a storage volume in Kubernetes requires creating a PVC resource request. A PVC is a user's request for storage and is similar to a Pod: Pods consume node resources, while PVCs consume PV resources. A Pod can request specific levels of resources (CPU and memory); a claim can request a particular size and access mode (e.g., read/write once, or read-only many times).
StorageClass: used to mark the characteristics and performance of storage resources. A StorageClass provides administrators with a way to describe the "classes" of storage they offer. Different classes may map to quality-of-service levels, backup policies, or arbitrary policies determined by cluster administrators. Kubernetes itself is unopinionated about what the classes represent. This concept is sometimes called a "profile" in other storage systems.
Fig. 1 is a flowchart of a method for providing disk speed limiting based on logical volume management under Kubernetes according to an embodiment of the present application; the method comprises at least the following steps:
Step 101: after a volume is successfully mounted for a container in the Kubernetes cluster, if the device state is detected to be running, determine whether an iops value and a bps value need to be set.
Before disk speed limiting, a PVC needs to be created in the Kubernetes cluster to apply for storage space, and a Deployment needs to be created, wherein the Deployment comprises the user-configured value, and the user-configured value comprises annotations limiting iops and bps; the newly created Deployment and the size of the storage space it applies for are monitored, and nodes are scheduled and allocated accordingly; the container is bound to the current node, and annotation and node-selection tags are added for the PVC; the PV is created after the volume is successfully created.
Specifically, referring to fig. 2, after the PVC and the Deployment are created in the Kubernetes cluster, the scheduler in the cluster detects the newly created Deployment and the size of the disk capacity applied for by the PVC, and schedules and allocates nodes; the created container is bound to the node, and annotation and node-selection tags are added for the PVC. The control module creates a volume after it observes the creation of the PVC; after the volume is created successfully, the PV is created, followed by container creation and storage-volume creation.
Then, after observing the creation of the container, the node module obtains the volume group device number and mounts the volume at the corresponding device address, and the mount succeeds.
After the volume is successfully mounted, if the node module finds that the bound PVC is consistent with the container's csi-driver, it checks whether the container's annotation exists; if the annotation exists, it determines whether the container uses logical volume management; if the container uses logical volume management, it determines that the iops value and the bps value need to be set.
In the present application, only logical partitions under LVM support the bps- and iops-limiting functions; limiting the iops and bps of disk partitions is not supported in raw-disk mode. The specific reason is as follows: for a raw partition, the kernel cgroup finds that the partition is not the block device itself and reports an error. When LVM is used, a logical layer is introduced, and the blkio subsystem supports device-mapper devices, thereby enabling LVM-based limits on disk bps and iops.
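The LVM path can be made concrete: an LVM logical volume appears as a device-mapper block device (e.g. under `/dev/mapper/`), and it is that device's major:minor number that the cgroup throttle files accept. A sketch, assuming such a device node exists on the host (the path below is illustrative):

```python
import os
import stat

def format_devno(rdev):
    """Render a raw device number as the MAJ:MIN string cgroup files expect."""
    return f"{os.major(rdev)}:{os.minor(rdev)}"

def lv_device_number(lv_path):
    """MAJ:MIN of an LVM logical volume, e.g. /dev/mapper/vg0-data.

    Raw partition nodes such as /dev/sda1 are not accepted by the v1
    blkio throttle interface (the kernel rejects non-whole-device
    entries), whereas device-mapper devices are accepted.
    """
    st = os.stat(lv_path)
    if not stat.S_ISBLK(st.st_mode):
        raise ValueError(f"{lv_path} is not a block device")
    return format_devno(st.st_rdev)
```

The node module would call `lv_device_number` once per mounted volume and reuse the result when writing the throttle entries.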
Step 102: in the case that the iops value and the bps value need to be set, detect the kernel version supported by the current node, and execute step 103 or step 104.
Step 103: in the case that the current node supports the cgroup v1 version, set the iops value and the bps value in the kernel cgroup blkio subsystem.
Step 104: in the case that the current node supports the cgroup v2 version, set the iops value and the bps value in the kernel io subsystem.
Here, cgroup v1 supports block device speed limiting, and cgroup v2 supports both block device and file system speed limiting.
Referring to fig. 2, the process of setting the kernel iops and bps values comprises: the node module queries the device number of the logical volume used by the container; determines whether the value set in the container's annotation is consistent with the iops value and the bps value set in the kernel cgroup subsystem of the current node; and if not, writes the user-configured value into the kernel cgroup configuration parameters.
Specifically, referring to fig. 3, after the PVC and the container are created, the node module checks the kernel version supported by the current node; if the cgroup v1 version is supported, it writes the user-configured value (31 in fig. 3) into the kernel cgroup blkio subsystem; if the cgroup v2 version is supported, the user-configured value (32 in fig. 3) is written into the kernel io subsystem.
Optionally, the user-configured value may further include the device number limited on the node; in this case, when the device number of the logical volume is inconsistent with the user-configured device number, the user-configured device number is also written into the kernel cgroup configuration parameters, after which the kernel cgroup blkio subsystem or the kernel io subsystem is updated.
To understand the disk speed-limiting method provided by this embodiment more clearly, it is described with reference to the example of fig. 4. After the node detects that the state is running, the container created on this node is selected from the container group, and it is judged whether the container carries iops and bps annotations; if not, no disk speed-limit processing is performed and the process ends.
If the iops and bps annotations exist, the kernel version supported by the current node is checked; if the cgroup v1 version is supported, container annotation changes are monitored; if there is a change, the user-configured value is written into the kernel cgroup blkio subsystem. Specifically, this covers the read iops and bps values and the write iops and bps values, and the process ends.
If the cgroup v2 version is supported, container annotation changes are monitored; if there is a change, the user-configured value is written into the kernel io subsystem. Specifically, this covers the read iops and bps values, the write iops and bps values, and the combined read-write iops and bps values, and the process ends.
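For the cgroup v2 path, all four limits for a device go into a single `io.max` line. A sketch of building that line, following the format documented for the kernel's cgroup v2 io controller (`"max"` means unlimited):

```python
def io_max_line(devno, rbps=None, wbps=None, riops=None, wiops=None):
    """Build one cgroup v2 io.max entry, e.g.
    '253:0 rbps=10485760 wbps=max riops=1000 wiops=max'.
    None leaves that direction unlimited ('max')."""
    pairs = (("rbps", rbps), ("wbps", wbps), ("riops", riops), ("wiops", wiops))
    fields = [devno] + [f"{k}={'max' if v is None else v}" for k, v in pairs]
    return " ".join(fields)
```

The node module would write this line into the Pod cgroup's `io.max` file whenever the annotation values change.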
In summary, in the method for providing disk speed limiting based on logical volume management under Kubernetes according to this embodiment, after a volume is successfully mounted for a container in the Kubernetes cluster, if the device state is detected to be running, it is determined whether an iops value and a bps value need to be set; the kernel version supported by the current node is detected in the case that an iops value and a bps value need to be set; in the case that the current node supports the cgroup v1 version, the iops value and the bps value are set in the kernel cgroup blkio subsystem; in the case that the current node supports the cgroup v2 version, the iops value and the bps value are set in the kernel io subsystem. This solves the problem that speed limits cannot be applied to dynamically provisioned disks in a Kubernetes environment, realizes iops and bps speed limiting for disks dynamically provisioned in a Kubernetes environment, and supports speed limiting of both block devices and file systems.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (6)

1. A method for providing disk speed limiting based on logical volume management under Kubernetes, the method comprising:
after a volume is successfully mounted for a container in the Kubernetes cluster, if the device state is detected to be running, determining whether an iops value and a bps value need to be set;
detecting a kernel version supported by a current node in the case that an iops value and a bps value need to be set;
in the case that the current node supports a cgroup v1 version, setting the iops value and the bps value in a kernel cgroup blkio subsystem;
and in the case that the current node supports a cgroup v2 version, setting the iops value and the bps value in a kernel io subsystem.
2. The method of claim 1, wherein the determining whether the iops and bps values need to be set comprises:
determining whether the container is using logical volume management;
in the case where the container uses logical volume management, it is determined that the iops value and the bps value need to be set.
3. The method of claim 2, further comprising:
after the container is created, if the bound PVC is found to be consistent with the csi-driver of the container, checking whether the annotation of the container exists;
triggering execution of the step of determining whether the container is using logical volume management if an annotation of the container exists.
4. The method of claim 1, wherein setting the iops value and the bps value of the kernel cgroup blkio subsystem or setting the iops value and the bps value of the kernel io subsystem comprises:
querying the device number of the logical volume used by the container;
determining whether the value of the annotation setting of the container is consistent with the iops value and the bps value set by the kernel cgroup subsystem of the current node;
and if not, writing the value configured by the user into the kernel cgroup configuration parameter.
5. The method of claim 4, further comprising:
creating a PVC in a Kubernetes cluster to apply for storage space, and creating a Deployment, wherein the Deployment comprises the user-configured value, and the user-configured value comprises annotations limiting iops and bps;
monitoring a newly created Deployment and the size of the storage space applied for by the Deployment, and scheduling and allocating nodes accordingly;
binding the container with the current node, adding annotation and node selection tags for the PVC;
the PV is created after the volume creation is successful.
6. The method according to any one of claims 1 to 5, wherein the cgroup v1 version supports block device speed limiting, and the cgroup v2 version supports block device and file system speed limiting.
CN202210747360.2A 2022-06-29 2022-06-29 Method for providing disk speed limit based on logical volume management under Kubernetes Active CN114816276B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210747360.2A CN114816276B (en) 2022-06-29 2022-06-29 Method for providing disk speed limit based on logical volume management under Kubernetes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210747360.2A CN114816276B (en) 2022-06-29 2022-06-29 Method for providing disk speed limit based on logical volume management under Kubernetes

Publications (2)

Publication Number Publication Date
CN114816276A true CN114816276A (en) 2022-07-29
CN114816276B CN114816276B (en) 2022-09-23

Family

ID=82522674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210747360.2A Active CN114816276B (en) 2022-06-29 2022-06-29 Method for providing disk speed limit based on logical volume management under Kubernetes

Country Status (1)

Country Link
CN (1) CN114816276B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107301022A (en) * 2017-06-27 2017-10-27 北京溢思得瑞智能科技研究院有限公司 A kind of storage access method and system based on container technique
CN111913665A (en) * 2020-07-30 2020-11-10 星辰天合(北京)数据科技有限公司 Mounting method and device of storage volume and electronic equipment
CN113110918A (en) * 2021-05-13 2021-07-13 广州虎牙科技有限公司 Read-write rate control method and device, node equipment and storage medium
CN114661419A (en) * 2022-03-25 2022-06-24 星环信息科技(上海)股份有限公司 Service quality control system and method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107301022A (en) * 2017-06-27 2017-10-27 北京溢思得瑞智能科技研究院有限公司 A kind of storage access method and system based on container technique
CN111913665A (en) * 2020-07-30 2020-11-10 星辰天合(北京)数据科技有限公司 Mounting method and device of storage volume and electronic equipment
CN113110918A (en) * 2021-05-13 2021-07-13 广州虎牙科技有限公司 Read-write rate control method and device, node equipment and storage medium
CN114661419A (en) * 2022-03-25 2022-06-24 星环信息科技(上海)股份有限公司 Service quality control system and method

Also Published As

Publication number Publication date
CN114816276B (en) 2022-09-23

Similar Documents

Publication Publication Date Title
US9971823B2 (en) Dynamic replica failure detection and healing
US11500814B1 (en) Chain file system
US10860385B2 (en) Method and system for allocating and migrating workloads across an information technology environment based on persistent memory availability
US9535739B2 (en) Virtual machine storage
US11936731B2 (en) Traffic priority based creation of a storage volume within a cluster of storage nodes
EP3039575B1 (en) Scalable distributed storage architecture
US20130305243A1 (en) Server system and resource management method and program
US20210405902A1 (en) Rule-based provisioning for heterogeneous distributed systems
US7194594B2 (en) Storage area management method and system for assigning physical storage areas to multiple application programs
WO2012039053A1 (en) Method of managing computer system operations, computer system and computer-readable medium storing program
US11199972B2 (en) Information processing system and volume allocation method
US20220057947A1 (en) Application aware provisioning for distributed systems
CN110825704A (en) Data reading method, data writing method and server
CN101159596A (en) Method and apparatus for deploying servers
US20070174836A1 (en) System for controlling computer and method therefor
CN114840148B Method for realizing disk acceleration based on Linux kernel bcache technology in Kubernetes
CN114816272B (en) Magnetic disk management system under Kubernetes environment
CN114816276B (en) Method for providing disk speed limit based on logical volume management under Kubernetes
US20230029380A1 (en) System and method of multilateral computer resource reallocation and asset transaction migration and management
US20220318042A1 (en) Distributed memory block device storage
JP6244496B2 (en) Server storage system management system and management method
CN113687935A (en) Cloud native storage scheduling mode based on super-fusion design
EP4260185A1 (en) System and method for performing workloads using composed systems
US11675678B1 (en) Managing storage domains, service tiers, and failed service tiers
US11663096B1 (en) Managing storage domains, service tiers and failed storage domain

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant