CN117931421A - Research method of exclusive CPU core resource management model in cluster


Info

Publication number
CN117931421A
CN117931421A (application CN202311702996.6A)
Authority
CN
China
Prior art keywords
cpu
exclusive
core
node
resources
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311702996.6A
Other languages
Chinese (zh)
Inventor
李启蒙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority to CN202311702996.6A
Publication of CN117931421A
Pending legal status

Abstract

The invention belongs to the technical field of emerging information and discloses a research method of an exclusive CPU core resource management model in a cluster, which comprises the following steps: S01: isolate the corresponding number of CPU cores on the corresponding node, take them as the node CPU core resource partition pool of the current node, and reserve them for the node as CPU exclusive-core extension resources; S02: publish the new CPU exclusive-core extension resources on the node and register the CPU exclusive-core resources with the k8s system; S03: realize the allocation of CPU exclusive-core resources by defining the resource requests and limits of a Pod, and allocate and bind CPU exclusive-core resources to service Pods on the node. The invention gives the exclusive-core resources managed by the cluster on a node more thorough isolation.

Description

Research method of exclusive CPU core resource management model in cluster
Technical Field
The invention belongs to the technical field of emerging information, and particularly relates to a research method of an exclusive CPU core resource management model in a cluster.
Background
Currently, k8s treats all CPU resources on a node as one large shared resource pool (when kube-reserved and system-reserved are not configured), and the scheduler treats every CPU as available for scheduling work processes or threads. CPU throttling, context switches and CPU cache contention may periodically preempt an executing process or thread. This is beneficial for multitasking, since overall CPU utilization is higher, but it is very disadvantageous for latency-sensitive workloads. The usual way to optimize the performance of such workloads is to isolate a CPU or a group of CPUs from the kernel scheduler and bind the latency-sensitive work so that it executes only on the isolated CPU set; the service then has exclusive access to that set of CPUs, which eliminates the context switching and CPU throttling caused by thread preemption.
K8s supports resource limits at the container, Pod and Namespace level, but resources can be overcommitted: the underlying CPU and memory resources are shared, and the stability of critical services suffers once resources are squeezed. K8s also supports exclusive use of CPU resources by a pod, but the following problems remain:
1. k8s allocates exclusive-core resources to a pod, but it cannot guarantee that system services on the node will not use those CPU exclusive cores, so isolation is not complete;
2. k8s does not support letting a set of services or several pods share a number of CPU cores without being affected by other services.
To solve these two problems, this patent proposes a more thorough resource-isolation approach within the k8s CPU resource management method. For compute-resource-sensitive services, CPU resource isolation becomes more complete: services bound to exclusive CPU cores are isolated more thoroughly while remaining managed and scheduled by the k8s cluster, thereby avoiding impact on critical services when cluster resources are squeezed.
Disclosure of Invention
The invention aims to provide a research method of an exclusive CPU core resource management model in a cluster, which is used for solving the technical problems in the background technology.
In order to achieve the above purpose, the present invention adopts the following technical scheme: a research method of an exclusive CPU core resource management model in a cluster, comprising the following steps:
S01: isolate the corresponding number of CPU cores on the corresponding node, take them as the node CPU core resource partition pool of the current node, and reserve them for the node as CPU exclusive-core extension resources;
S02: publish the new CPU exclusive-core extension resources on the node and register the CPU exclusive-core resources with the k8s system;
S03: realize the allocation of CPU exclusive-core resources by defining the resource requests and limits of a Pod, and allocate and bind CPU exclusive-core resources to service Pods on the node;
S04: clear the CPU exclusive-core resources on the node from the node's capacity via a PATCH request.
Preferably, the node CPU core resource partition pool comprises an isolated CPU pool and kube-reserved.
Preferably, the node CPU core resource partition pool in S01 is the isolated CPU pool. The isolated CPU pool is realized by registering the CPU exclusive-core resources reserved on each node; once the exclusive-core information of a node is registered in k8s, workloads can be scheduled to that node.
Preferably, kube-reserved is the CPU resource value reserved for system daemons; it is configured in the kubelet of the k8s system and describes the resource reservation for the k8s system daemons.
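As a hedged illustration of how such a reservation might be configured, the standard KubeletConfiguration fields kubeReserved and systemReserved can be set. The field names are the documented kubelet ones; the CPU values below are illustrative, not taken from the patent:

```shell
# Sketch: write a KubeletConfiguration fragment reserving CPU for daemons.
# The values "500m" are examples, not values from the patent.
cat > /tmp/kubelet-config-fragment.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
kubeReserved:
  cpu: "500m"      # reserved for k8s daemons (kubelet, container runtime)
systemReserved:
  cpu: "500m"      # reserved for OS system daemons
EOF
cat /tmp/kubelet-config-fragment.yaml
```

On a real node this fragment would be merged into the kubelet's configuration file and the kubelet restarted; the scheduler then sees the node's allocatable CPU reduced by the reserved amounts.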
Preferably, in S02, registering the CPU exclusive-core resources with the k8s system is implemented by a CPU manager, which is also responsible for collecting the state of the CPU exclusive-core resources on each node and for publishing and modifying the CPU exclusive-core resources on the node.
Preferably, publishing new CPU exclusive-core extension resources on the node and registering them with the k8s system is implemented as follows: once the CPU exclusive-core resources are registered in the k8s system, k8s has the CPU core information of each node and the corresponding CPU topology information. When a user creates a workload, isocpu exclusive-core information is specified with identical, integer requests and limits, and k8s schedules the workload to a node with CPU exclusive-core resources based on the CPU core information obtained in the previous step. The agent daemon residing on the node watches the creation of pod resources; when isocpu exclusive-core resources are specified in a pod, the agent allocates unused exclusive cores (isocpu) to the container in ascending numeric order by calling the container runtime interface. After the exclusive-core resources are allocated, the agent daemon moves them to the used-core list and reports their usage state to the CPU manager, which updates the node's CPU exclusive-core resource list in k8s.
Preferably, in S03, allocating and binding CPU exclusive-core resources to service Pods on a node is implemented by the agent, which is responsible for managing the CPU core resource partition pool on each node and for reporting the usage state of the CPU exclusive-core resources to the CPU manager.
Preferably, in S03, when Pod is deleted or rescheduled, the agent is responsible for recovering the CPU exclusive core resource, and periodically reporting the status of the CPU exclusive core resource on the node.
Preferably, in S03, the agent is responsible for acquiring CPU exclusive-core resources from the isolated CPU pool and for numbering and managing them, allocating one or more CPU exclusive-core resources to one pod, one CPU exclusive-core resource to a group of pods, or multiple CPU exclusive-core resources to multiple pods, realizing one-to-one, one-to-many, many-to-one and many-to-many binding relationships.
Preferably, the one-to-one, one-to-many, many-to-one and many-to-many binding relationships are as follows:
one-to-one binding allocates the CPU exclusive-core resource with the current number only to the POD with the corresponding number, i.e. the cgroup cpuset binding is assigned before the container starts, giving the Pod on the node CPU affinity and exclusivity, the exclusivity being realized with the cgroup cpuset controller;
one-to-many binding binds multiple CPU exclusive-core resources to the POD with the corresponding number;
many-to-one binding binds one CPU exclusive-core resource to multiple PODs, so that those PODs share the current CPU exclusive-core resource;
many-to-many binding binds multiple CPU exclusive-core resources to multiple PODs, so that a group of PODs shares a group of CPU exclusive cores, the same CPU list being written into the cgroup cpuset of each POD at the same time.
In summary, due to the adoption of the technical scheme, the beneficial effects of the invention are as follows:
1. The invention gives the exclusive-core resources managed by the cluster on a node more thorough isolation: neither the system services nor the operating system on the node can use the exclusive cores. In addition, k8s can schedule a set of services or pods onto one or several CPU cores, so that the set shares those cores without being affected by other services.
2. The invention can solve the binding of the delay sensitive workload and the CPU exclusive core in the cluster and improve the service stability; meanwhile, single-to-single, single-to-many, many-to-single and many-to-many binding of the exclusive CPU and the service can be realized, the system performance, the efficiency and the response speed can be improved, and the system resources are fully utilized. The method and the system can be suitable for different scenes and task demands, thereby meeting different business demands, helping enterprises to reduce cost, improve efficiency, enhance business competitiveness and the like.
3. The invention effectively partitions the CPU cores on a node into pools, solving the problem of different services needing different classes of CPU, saving computing resources without affecting critical services. The invention is highly extensible and can also be applied to the management of custom resources such as hugepages, GPUs, encryption accelerator cards and virtual network cards.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 shows a flow chart of the present invention;
FIG. 2 is a diagram showing the architecture of the exclusive CPU core resource management model of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Referring to FIGS. 1 and 2, a research method of an exclusive CPU core resource management model in a cluster comprises the following steps:
S01: isolate the corresponding number of CPU cores on the corresponding node, take them as the node CPU core resource partition pool of the current node, and reserve them for the node as CPU exclusive-core extension resources;
The node CPU core resource partition pool comprises an isolated CPU pool and kube-reserved. The isolated CPU pool is realized by registering the CPU exclusive-core resources reserved on each node; the CPU exclusive-core resources are registered in k8s, and once k8s has the exclusive-core information of a node, workloads can be scheduled to that node. kube-reserved is the CPU resource value reserved for system daemons, configured in the kubelet of the k8s system, and describes the resource reservation for k8s system daemons such as the kubelet, the container runtime and the node problem monitor. The node CPU core resource partition pool of the current node is the isolated CPU pool;
specifically, reserving CPU exclusive core expansion resources for the node is realized by the following steps:
First, confirm the number of available CPU cores: determine the number of CPU cores to isolate from the CPU information in the system. Typically the count is based on the number of logical processors (threads); for example, if the system has 16 logical processors and only 8 cores are to be isolated as an independent resource pool, the cores other than cpu0 through cpu7 (i.e. cpu8 through cpu15) should be selected;
Second, load the CPU isolation module in the terminal with the command sudo modprobe cpuset. Then create an independent CPU resource pool: create a new cpuset subsystem directory, e.g. mycpuset, under the /sys/fs/cgroup/cpuset directory: sudo mkdir /sys/fs/cgroup/cpuset/mycpuset;
Add CPU cores to the resource pool: add the CPU cores to be isolated to the 'mycpuset' resource pool; for example, add CPU cores 2 and 3:
echo 2,3 | sudo tee /sys/fs/cgroup/cpuset/mycpuset/cpuset.cpus
Restrict other processes from using the resource pool: with the 'mycpuset' resource pool, only designated processes are allowed to use these CPU cores; for example, move the process of the current terminal into the 'mycpuset' resource pool:
echo $$ | sudo tee /sys/fs/cgroup/cpuset/mycpuset/tasks
Finally, verify the isolated CPU cores: use the cat /proc/self/status command (the Cpus_allowed_list field) to verify whether the current process is running on the isolated CPU cores;
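The steps above can be combined into one hedged sketch. Writing under /sys/fs/cgroup/cpuset requires root and a cgroup-v1 cpuset hierarchy, so that part is guarded; the verification step runs on any Linux host:

```shell
# Sketch under stated assumptions: cgroup v1 cpuset mounted, run as root
# for the pool-creation part. The verification step needs neither.
POOL=/sys/fs/cgroup/cpuset/mycpuset
if [ -d /sys/fs/cgroup/cpuset ] && [ -w /sys/fs/cgroup/cpuset ]; then
  mkdir -p "$POOL"
  echo 2,3 | tee "$POOL/cpuset.cpus" > /dev/null                 # cores to isolate
  cat /sys/fs/cgroup/cpuset/cpuset.mems > "$POOL/cpuset.mems"    # mems must be set before tasks
  echo $$ | tee "$POOL/tasks" > /dev/null                        # move this shell into the pool
fi
# Verify which cores the current process may run on:
grep Cpus_allowed_list /proc/self/status
```

Note that in a v1 cpuset, cpuset.mems must be populated before any task can be attached; copying the parent's value is the usual way to do this.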
In this way, the exclusive-core resources managed by the cluster on the node are isolated more thoroughly; neither the system services nor the operating system on the node can use the exclusive cores.
S02: releasing new CPU exclusive core extension resources on the node, and registering the CPU exclusive core resources into the k8s system;
Specifically, registering the CPU exclusive-core resources with the k8s system is realized through the CPU manager, which is also responsible for collecting the state of the CPU exclusive-core resources on each node and for publishing and modifying the CPU exclusive-core resources on the nodes;
The specific implementation is as follows: once the CPU exclusive-core resources are registered in the k8s system, k8s has the CPU core information of each node and the corresponding CPU topology information. When a user creates a workload, isocpu exclusive-core information is specified with identical, integer requests and limits, and k8s schedules the workload to a node with CPU exclusive-core resources based on the CPU core information obtained in the previous step. The agent daemon residing on the node watches the creation of pod resources; when isocpu exclusive-core resources are specified in a pod, the agent allocates unused exclusive cores (isocpu) to the container in ascending numeric order by calling the container runtime interface. After the exclusive-core resources are allocated, the agent daemon moves the allocated CPU exclusive-core resources to the used-core list and reports their usage state to the CPU manager, which updates the node's CPU exclusive-core resource list in k8s;
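The workload specification described above (requests equal to limits, an integer value) might look like the following Pod manifest sketch. The extended-resource name example.com/isocpu is an assumption for illustration, since the patent does not give the registered name:

```shell
# Hedged sketch of a Pod requesting two exclusive cores via an extended
# resource. "example.com/isocpu" is a hypothetical resource name.
cat > /tmp/isocpu-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: latency-sensitive
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        example.com/isocpu: "2"   # must equal limits and be an integer
      limits:
        example.com/isocpu: "2"
EOF
# kubectl apply -f /tmp/isocpu-pod.yaml   # requires a cluster with the resource registered
cat /tmp/isocpu-pod.yaml
```

Because extended resources cannot be overcommitted, Kubernetes itself enforces that requests equal limits for them, which matches the constraint stated in the text.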
S03: the method comprises the steps of realizing the allocation of CPU exclusive core resources by defining the resource request and the limitation of Pod, and allocating and binding the CPU exclusive core resources for service Pod on a node;
Specifically, allocating and binding CPU exclusive-core resources to service Pods on a node is implemented by an agent, which is responsible for managing the CPU core resource partition pool on each node and for reporting the usage state of the CPU exclusive-core resources to the CPU manager; when a Pod is deleted or rescheduled, the agent is responsible for reclaiming the CPU exclusive-core resources and periodically reports the state of the CPU exclusive-core resources on the node;
Specifically, the agent is responsible for acquiring CPU exclusive-core resources from the isolated CPU pool and for numbering and managing them: one or more CPU exclusive-core resources can be allocated to one pod, one CPU exclusive-core resource to a group of pods, or multiple CPU exclusive-core resources to multiple pods, realizing one-to-one, one-to-many, many-to-one and many-to-many binding relationships;
further, these binding relationships are as follows:
one-to-one binding allocates the CPU exclusive-core resource with the current number only to the POD with the corresponding number, i.e. the cgroup cpuset binding is assigned before the container starts, giving the Pod on the node CPU affinity and exclusivity; this exclusivity is realized with the cgroup cpuset controller, for example:
echo 2 > /sys/fs/cgroup/cpuset/cpuset.cpus
one-to-many binding binds multiple CPU exclusive-core resources to the POD with the corresponding number; for example, to bind cores 3 and 4 to the POD: echo 3-4 > /sys/fs/cgroup/cpuset/cpuset.cpus
many-to-one binding binds one CPU exclusive-core resource to multiple PODs, so that those PODs share the current CPU exclusive-core resource;
many-to-many binding binds multiple CPU exclusive-core resources to multiple PODs, so that a group of PODs shares a group of CPU exclusive cores; the same CPU list is written into the cgroup cpuset of each POD at the same time.
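The many-to-many case above can be sketched as follows. Real pod cgroups live under /sys/fs/cgroup/cpuset/kubepods/...; /tmp stands in for them here so the sketch runs without root, which is an assumption for illustration only:

```shell
# Simulate many-to-many binding: write the same CPU list into each POD's
# cpuset.cpus. Directory layout under /tmp is a stand-in for the real
# per-pod cgroup directories.
CPUS="2-5"
ROOT=/tmp/cpuset-demo
for pod in pod-a pod-b pod-c; do
  mkdir -p "$ROOT/$pod"
  echo "$CPUS" > "$ROOT/$pod/cpuset.cpus"   # all three PODs share cores 2-5
done
cat "$ROOT/pod-b/cpuset.cpus"   # prints 2-5
```

The design point is that exclusivity is still preserved against the rest of the node: cores 2-5 appear in no other cgroup's cpuset, so only this group of PODs can be scheduled onto them.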
S04: clearing CPU exclusive core resources on the node from the capacity of the node through PATCH request;
the method specifically comprises the following steps:
First, determine the name of the node whose CPU exclusive-core resources are to be cleared. The nodes in the cluster can be listed with the following command:
kubectl get nodes
Then create a PATCH request file containing the CPU core information to be cleared; for example, create a file named patch-request.json and specify in it the CPU cores to be cleared;
Finally, send the PATCH request with the following command, applying the PATCH request file to the node:
kubectl patch node <node-name> --patch "$(cat patch-request.json)"
where <node-name> is replaced with the actual node name and patch-request.json with the path of the PATCH request file created above.
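A hedged example of what such a patch-request.json might contain is shown below. The extended-resource name example.com/isocpu is an assumption, and "/" inside a JSON-Patch path is escaped as "~1":

```shell
# Sketch: a JSON-Patch body that removes the exclusive-core capacity from
# a node's status. "example.com/isocpu" is a hypothetical resource name.
cat > /tmp/patch-request.json <<'EOF'
[{"op": "remove", "path": "/status/capacity/example.com~1isocpu"}]
EOF
# Applying it needs a cluster (kubectl >= 1.24 for --subresource=status):
# kubectl patch node <node-name> --subresource=status --type=json \
#   --patch "$(cat /tmp/patch-request.json)"
cat /tmp/patch-request.json
```

Patching the status subresource is required here because capacity lives under /status, which a plain object patch does not touch.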
Example 2
Assuming that a user needs to use the model to allocate 2 exclusive cores to one service pod, the key flow consists of the following steps:
Step 1: reserving CPU exclusive core expansion resources for the node;
for example, isolate CPU cores 1-8 on a node as the isolated CPU pool of that node:
sudo sed -i 's/GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 isolcpus=1-8"/' /etc/default/grub
sudo update-grub
reboot
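After the reboot, the kernel's isolated-core set can be checked as a sanity step (a sketch; empty output from the first command means no cores are currently isolated):

```shell
# Verify the isolcpus setting took effect. Both reads work on any Linux
# host; on a machine without isolcpus the fallback message is printed.
if [ -r /sys/devices/system/cpu/isolated ]; then
  cat /sys/devices/system/cpu/isolated
fi
grep -o 'isolcpus=[^ ]*' /proc/cmdline || echo "isolcpus not on the kernel command line"
```

Cores listed in /sys/devices/system/cpu/isolated are skipped by the kernel scheduler, which is exactly the isolation property the pool relies on.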
Step 2: releasing new CPU exclusive core expansion resources on the node;
A PATCH request then tells Kubernetes that the node has a set of extended resources called isocpu.
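This Step-2 registration can be sketched as a JSON-Patch against the node's status subresource. The resource name example.com/isocpu and the count "8" (cores 1-8 from Step 1) are assumptions for illustration:

```shell
# Sketch: advertise isocpu as an extended resource on a node.
# "example.com/isocpu" is a hypothetical resource name.
cat > /tmp/isocpu-register.json <<'EOF'
[{"op": "add", "path": "/status/capacity/example.com~1isocpu", "value": "8"}]
EOF
# With `kubectl proxy --port=8001` running against a real cluster:
# curl -H "Content-Type: application/json-patch+json" -X PATCH \
#   --data @/tmp/isocpu-register.json \
#   http://localhost:8001/api/v1/nodes/<node-name>/status
cat /tmp/isocpu-register.json
```

Once the capacity appears in the node status, the scheduler can place pods that request the isocpu resource onto this node, as described in S02.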
Step 3: allocate CPU exclusive-core extension resources to the service pod on the node;
Step 4: clean up the exclusive-core resources on the node.
Implementing the above steps requires the k8s system to cooperate with the CPU manager (cpumanager) and the isolated-CPU agent (isolate CPU agent).
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art, who is within the scope of the present invention, should make equivalent substitutions or modifications according to the technical scheme of the present invention and the inventive concept thereof, and should be covered by the scope of the present invention.
The preferred embodiments of the invention disclosed above are intended only to assist in the explanation of the invention. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. The invention is limited only by the claims and the full scope and equivalents thereof.

Claims (10)

1. A research method of a management model of exclusive CPU core resources in a cluster, characterized by comprising the following steps:
S01: isolate the corresponding number of CPU cores on the corresponding node, take them as the node CPU core resource partition pool of the current node, and reserve them for the node as CPU exclusive-core extension resources;
S02: publish the new CPU exclusive-core extension resources on the node and register the CPU exclusive-core resources with the k8s system;
S03: realize the allocation of CPU exclusive-core resources by defining the resource requests and limits of a Pod, and allocate and bind CPU exclusive-core resources to service Pods on the node;
S04: clear the CPU exclusive-core resources on the node from the node's capacity via a PATCH request.
2. The method of claim 1, wherein the node CPU core resource partition pool comprises an isolated CPU pool and kube-reserved.
3. The method for researching an exclusive CPU core resource management model in a cluster as claimed in claim 2, wherein the node CPU core resource partition pool in S01 is an isolated CPU pool, the isolated CPU pool is implemented by registering the reserved CPU exclusive core resource of each node, the CPU exclusive core resource is registered in K8S, and when K8S has exclusive core information of a node, a workload can be scheduled to the node.
4. A method of studying an exclusive CPU core resource management model in a cluster according to claim 3, wherein the kube-reserved CPU resource value reserved for the system daemon is configured in kubelet of the k8s system to describe the resource reservation value for the k8s system daemon.
5. The method for researching an exclusive CPU core resource management model in a cluster as claimed in claim 1 or 4, wherein in S02, registering the CPU exclusive core resource in the k8S system is implemented by a CPU manager, and the CPU manager is further responsible for collecting the status of the CPU exclusive core resource on each node, and issuing and modifying the CPU exclusive core resource on the node.
6. The method for researching an exclusive CPU core resource management model in a cluster as claimed in claim 5, wherein said issuing new CPU exclusive core extension resources on the node and registering the CPU exclusive core resources in the k8s system is implemented as follows:
When CPU exclusive core resources are registered in a k8s system, k8s has CPU core information of each node and corresponding CPU topology information;
When a user creates a workload, isocpu exclusive-core information is specified with identical, integer requests and limits, and k8s schedules the workload to a node with CPU exclusive-core resources based on the CPU core information obtained in the previous step;
the agent daemon residing on the node watches the creation of pod resources, and when isocpu exclusive-core resources are specified in a pod, the agent allocates unused exclusive cores (isocpu) to the container in ascending numeric order by calling the container runtime interface;
after the exclusive-core resources are allocated, the agent daemon moves the allocated CPU exclusive-core resources to the used-core list and reports their usage state to the CPU manager, which updates the node's CPU exclusive-core resource list in k8s.
7. The method for studying an exclusive CPU core resource management model in a cluster as claimed in claim 6, wherein in S03, allocating and binding CPU exclusive-core resources to service Pods on nodes is implemented by an agent, which is responsible for managing the CPU core resource partition pool on each node and for reporting the usage state of the CPU exclusive-core resources to the CPU manager.
8. The method for studying an exclusive CPU core resource management model in a cluster as claimed in claim 7, wherein in S03, when Pod is deleted or rescheduled, the agent is responsible for recovering the CPU exclusive core resource and periodically reporting the status of the CPU exclusive core resource on the node.
9. The method of claim 8, wherein in S03 the agent is responsible for acquiring CPU exclusive-core resources from the isolated CPU pool and for numbering and managing them, allocating one or more CPU exclusive-core resources to one pod, one CPU exclusive-core resource to a group of pods, or multiple CPU exclusive-core resources to multiple pods, thereby realizing one-to-one, one-to-many, many-to-one and many-to-many binding relationships.
10. The method for studying a model of exclusive CPU core resource management in a cluster according to claim 9, wherein the one-to-one, one-to-many, many-to-one and many-to-many binding relationships are as follows:
one-to-one binding allocates the CPU exclusive-core resource with the current number only to the POD with the corresponding number, i.e. the cgroup cpuset binding is assigned before the container starts, giving the Pod on the node CPU affinity and exclusivity, the exclusivity being realized with the cgroup cpuset controller;
one-to-many binding binds multiple CPU exclusive-core resources to the POD with the corresponding number;
many-to-one binding binds one CPU exclusive-core resource to multiple PODs, so that those PODs share the current CPU exclusive-core resource;
many-to-many binding binds multiple CPU exclusive-core resources to multiple PODs, so that a group of PODs shares a group of CPU exclusive cores, the same CPU list being written into the cgroup cpuset of each POD at the same time.
CN202311702996.6A 2023-12-12 2023-12-12 Research method of exclusive CPU core resource management model in cluster Pending CN117931421A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311702996.6A CN117931421A (en) 2023-12-12 2023-12-12 Research method of exclusive CPU core resource management model in cluster


Publications (1)

Publication Number Publication Date
CN117931421A true CN117931421A (en) 2024-04-26

Family

ID=90769293



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination