CN116360977A - Resource allocation method and device and electronic equipment - Google Patents

Resource allocation method and device and electronic equipment

Info

Publication number
CN116360977A
Authority
CN
China
Prior art keywords
target
mig
gpu
strategy
configuration
Legal status: Pending
Application number
CN202310078341.XA
Other languages
Chinese (zh)
Inventor
王文潇
Current Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Application filed by Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202310078341.XA
Publication of CN116360977A

Classifications

    • G06F 9/5027: Allocation of resources (e.g. of the central processing unit [CPU]) to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/44505: Configuring for program initiating, e.g. using registry, configuration files
    • G06F 9/5072: Grid computing
    • G06F 9/5077: Logical partitioning of resources; management or configuration of virtualized resources
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiment of the invention relates to a resource allocation method and device, and to electronic equipment. The method comprises the following steps: when an operation event of a multi-instance GPU (MIG) is monitored, parsing the operation event to obtain instance information; when the configuration action instruction is determined to indicate execution of a first configuration action, selecting a target strategy from a preconfigured MIG strategy set according to the configuration parameter information and the second identification information; determining the node to be operated according to the first identification information; and calling a target driving interface to run preset code logic according to the configuration parameter information, so as to configure the target strategy into the target GPU. The process is simple, efficient, flexible and configurable, so the workload of MIG configuration on GPUs is greatly reduced, along with labor cost and time cost.

Description

Resource allocation method and device and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of cloud computing, in particular to a resource allocation method and device and electronic equipment.
Background
With the rapid development of cloud computing in recent years, hardware virtualization technology has also been iterating continuously. For scenarios such as deep learning, virtualization of the graphics processor (Graphics Processing Unit, GPU for short) becomes particularly important in order to make the GPU devices in a cloud computing cluster available to more users.
In the prior art, some manufacturers have developed new GPU products that natively support hardware-level partitioning, known as the Multi-Instance GPU (MIG for short), so that the MIG instances obtained by partitioning a GPU provide data protection, independent fault isolation and service stability.
In a cloud computing platform, MIG instances are generally managed by a kubernetes (K8s for short) cluster, and pod scheduling and allocation are performed using MIG instance resources. However, during MIG resource configuration, the number of MIG-capable GPUs differs across server nodes in the K8s cluster, and the MIG configuration scheme of each GPU may also differ. If the cluster is large, manual configuration is cumbersome and consumes excessive labor and time.
Disclosure of Invention
The application provides a resource allocation method, a resource allocation device and electronic equipment, so as to solve some or all of the technical problems in the prior art.
In a first aspect, the present application provides a resource allocation method, including:
when an operation event of a multi-instance GPU (MIG) is monitored, parsing the operation event to obtain instance information, wherein the instance information at least comprises first identification information of a node to be operated, a configuration action instruction, configuration parameter information corresponding to a target graphics processing unit (GPU) in the node to be operated, and second identification information corresponding to a target strategy to be configured for the target GPU;
When the configuration action instruction is determined to be used for indicating to execute the first configuration action, selecting a target strategy from a preconfigured MIG strategy set according to the configuration parameter information and the second identification information;
determining a node to be operated according to the first identification information;
and calling a target driving interface to run preset code logic according to the configuration parameter information, so as to configure the target strategy into the target GPU.
Optionally, the MIG is a MIG of a custom resource type.
Optionally, the configuration parameter information includes manufacturer information of the target GPU and third identification information of the target GPU;
according to the configuration parameter information, a target driving interface is called to run preset code logic for configuring a target strategy into a target GPU, and the method comprises the following steps:
determining a target driving interface according to manufacturer information of the target GPU;
determining a target GPU in the node to be operated according to the third identification information;
and calling a target driving interface to run preset code logic for configuring the target strategy into the target GPU.
Optionally, when determining that the configuration action instruction is used to instruct to perform the first configuration action, selecting, according to the configuration parameter information and the second identification information, a target policy from the preconfigured MIG policy set, including:
According to the third identification information, matching MIG strategy subsets corresponding to the target GPU from the MIG strategy sets;
and selecting a target strategy from the MIG strategy subset according to the second identification information.
Optionally, the MIG policy set is embodied in the form of a configmap object.
Optionally, the MIG policy set includes: a first data structure and a second data structure;
the first data structure comprises: at least one first type field, at least one second type field corresponding to each first type field, and at least one count combination field corresponding to each first type field;
the second data structure comprises a strategy subset corresponding to each first type field, wherein each strategy in the strategy subset is composed of a field value of any second type field corresponding to the first type field and a field value of any count combination field;
wherein the first type field is used to indicate the GPU type; the second type field is used for indicating the sub-resource type corresponding to the first type field, and the count combination field is used for indicating the quantity of each sub-resource type.
Optionally, before the target driving interface is called to run the preset code logic to configure the target policy to the target GPU, the method further includes:
Creating a processing task container for executing the target policy configuration task;
and calling a target driving interface to run preset code logic by using the processing task container, so as to configure the target strategy into the target GPU.
Optionally, the first configuring act includes: creating MIG mode and configuring MIG strategy.
Optionally, when determining that the configuration action instruction is to instruct to perform the second configuration action, the method includes:
and replacing the MIG strategy configured in the target GPU with the target strategy.
Optionally, replacing the configured MIG policy in the target GPU with the target policy includes:
clearing the configured MIG mode in the target GPU, and deleting the configured MIG strategy in the target GPU;
and according to the second identification information, calling a target driving interface to run preset code logic for configuring the target strategy into the target GPU.
Optionally, the second configuring act includes: the MIG policy is updated.
Optionally, when determining that the configuration action instruction is to instruct execution of the third configuration action, the method further comprises:
and clearing the configured MIG mode in the target GPU, and deleting the configured MIG strategy in the target GPU.
Optionally, the third configuring act includes: the MIG policy that is currently configured is deleted.
Optionally, the instance information further includes status information of the configuration instance, and the method further includes:
screening the GPU which does not complete policy configuration;
when the MIG strategy in the first GPU is determined to be consistent with the target strategy, updating the state information in the instance information corresponding to the first GPU to be configured successfully, wherein the first GPU is any one of the GPUs which are not configured by the strategy.
Optionally, detecting an operation state of code logic operated in the first GPU when it is determined that the MIG policy in the first GPU is inconsistent with the target policy;
when the running state is that the running is completed, updating the state information to be the configuration failure;
or when the running state is not running, detecting the running state of the code logic running in the first GPU again after a preset time period is set;
and when the running state is that the running is completed, detecting whether the MIG strategy in the first GPU is consistent with the target strategy or not again.
Optionally, listening for an operation event of the MIG includes:
and utilizing the preregistered MIG monitor to monitor the operation event of the MIG in real time.
In a second aspect, the present application provides a resource allocation apparatus, the apparatus comprising:
the monitoring module is used for monitoring operation events of a multi-instance GPU (MIG);
the analyzing module is used for, when the monitoring module monitors an operation event of the MIG, analyzing the operation event to obtain instance information, wherein the instance information at least comprises first identification information of a node to be operated, a configuration action instruction, configuration parameter information corresponding to a target graphics processing unit (GPU) in the node to be operated, and second identification information corresponding to a target strategy to be configured for the target GPU;
the processing module is used for determining MIG configuration actions to be executed according to the configuration action instructions;
the selecting module is used for selecting a target strategy from a preset MIG strategy set according to the configuration parameter information and the second identification information when the MIG configuration action to be executed is determined to be the first configuration action;
the processing module is also used for determining the node to be operated according to the first identification information; and calling a target driving interface to run preset code logic according to the configuration parameter information, so as to configure the target strategy into the target GPU.
Optionally, the MIG is a MIG of a custom resource type.
Optionally, the configuration parameter information includes manufacturer information of the target GPU and third identification information of the target GPU;
the processing module is specifically used for determining a target driving interface according to manufacturer information of the target GPU; determining a target GPU in the node to be operated according to the third identification information; and calling a target driving interface to run preset code logic for configuring the target strategy into the target GPU.
Optionally, the selecting module is further configured to match, according to the third identification information, a MIG policy subset corresponding to the target GPU from the MIG policy set; and selecting a target strategy from the MIG strategy subset according to the second identification information.
Optionally, the MIG policy set is embodied in the form of a configmap object.
Optionally, the MIG policy set includes: a first data structure and a second data structure;
the first data structure comprises: at least one first type field, at least one second type field corresponding to each first type field, and at least one count combination field corresponding to each first type field;
the second data structure comprises a strategy subset corresponding to each first type field, wherein each strategy in the strategy subset is composed of a field value of any second type field corresponding to the first type field and a field value of any count combination field;
wherein the first type field is used to indicate the GPU type; the second type field is used for indicating the sub-resource type corresponding to the first type field, and the count combination field is used for indicating the quantity of each sub-resource type.
Optionally, the processing module is further configured to create a processing task container for executing the target policy configuration task;
And calling a target driving interface to run preset code logic by using the processing task container, so as to configure the target strategy into the target GPU.
Optionally, the processing module is further configured to replace the MIG policy configured in the target GPU with the target policy when determining that the MIG configuration action to be performed is the second configuration action.
Optionally, the processing module is specifically configured to clear the MIG mode configured in the target GPU and delete the MIG policy configured in the target GPU;
and according to the second identification information, calling a target driving interface to run preset code logic for configuring the target strategy into the target GPU.
Optionally, the processing module is further configured to, when determining that the MIG configuration action to be performed is the third configuration action, clear the MIG mode configured in the target GPU and delete the MIG policy configured in the target GPU.
Optionally, the instance information further includes status information of the configuration instance;
the processing module is also used for screening the GPU which does not complete policy configuration;
when the MIG strategy in the first GPU is determined to be consistent with the target strategy, updating the state information in the instance information corresponding to the first GPU to be configured successfully, wherein the first GPU is any one of the GPUs which are not configured by the strategy.
Optionally, the processing module is further configured to detect an operation state of code logic operated in the first GPU when it is determined that the MIG policy in the first GPU is inconsistent with the target policy;
when the running state is that the running is completed, updating the state information to be the configuration failure;
or when the running state is not running, detecting the running state of the code logic running in the first GPU again after a preset time period is set;
and when the running state is that the running is completed, detecting whether the MIG strategy in the first GPU is consistent with the target strategy or not again.
Optionally, the monitoring module is specifically configured to monitor, by using a pre-registered MIG monitor, an operation event of the MIG in real time.
In a third aspect, an electronic device is provided, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor, configured to implement the steps of the resource allocation method according to any one of the embodiments of the first aspect when executing a program stored on a memory.
In a fourth aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the resource allocation method as in any of the embodiments of the first aspect.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
according to the method provided by the embodiment of the application, when the operation event of the MIG is monitored, the operation event is analyzed to obtain the instance information, and the instance information obtained by analysis at least comprises the first identification information of the operation node, so that the operation node is indicated to execute the MIG configuration operation on the GPU; the configuration instruction is used for indicating what operation is executed on the operation node, configuration parameter information corresponding to the target GPU in the node to be operated, and second identification information corresponding to the target strategy to be configured of the target GPU. When the configuration action instruction is determined to be indicating to execute the first configuration action, selecting a target strategy from the preconfigured MIG strategy set according to the configuration parameter information and the second identification information, determining a node to be operated according to the first identification information, and calling a target driving interface to execute code logic according to the configuration parameter information so as to configure the target strategy to the target GPU. In the whole process, MIG configuration information is configured into MIG examples, then a target strategy is determined according to the content in the example information as introduced above by monitoring the form of operation events of the example change, and then a corresponding scheduling task is generated for calling a target driving interface to execute bottom logic, so that MIG processing of a target GPU is completed. In addition, in the application document, because the MIG strategy configuration set mode is adopted, the MIG configuration strategy set can be freely modified from outside without changing the original logic in the operator component, and the invasion of the original logic of the project due to the change of the service requirement is reduced.
Drawings
Fig. 1 is a schematic flow chart of a resource allocation method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another resource allocation method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a data structure of an MIG policy set provided in an embodiment of the present invention;
FIG. 4 is a flowchart of another resource allocation method according to an embodiment of the present invention;
FIG. 5 is a flowchart of another resource allocation method according to an embodiment of the present invention;
FIG. 6 is an overall simplified block diagram of a resource allocation method flow provided by an embodiment of the present invention;
FIG. 7 is a flowchart of another resource allocation method according to an embodiment of the present invention;
FIG. 8 is a block diagram of an overall simple flow for updating configuration status in a resource configuration method according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a resource allocation device according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
For the purpose of facilitating an understanding of the embodiments of the present invention, reference will now be made to the following description of specific embodiments, taken in conjunction with the accompanying drawings, which are not intended to limit the embodiments of the invention.
In view of the technical problems mentioned in the background, an embodiment of the present application provides a resource allocation method; see fig. 1, which is a schematic flow chart of a resource allocation method provided in an embodiment of the present invention. The method can be applied to a kubernetes cluster in which a kubernetes operator component that manages MIG instances is created. MIG configuration information is mapped to MIG instances, which in one specific example are MIG instances of a custom resource type (Custom resource definition, crd for short), referred to as MIG crd. In an alternative example, the operator may pre-register a listener in the kubernetes cluster and use it to listen for MIG crd instance operation events in real time; the listener may be a controller contained in the operator. After an operation event is parsed, the corresponding operation logic is executed according to the instance information, completing MIG management on the underlying GPU.
Kubernetes is abbreviated as K8s, where the 8 stands for the eight characters "ubernete" between the "K" and the "s". It is an open-source container orchestration engine from Google that supports automated deployment, large-scale scaling, and containerized application management. A K8s operator is a controller for a specific application: it extends the Kubernetes API to represent instances of complex applications that K8s users create, configure and manage. Operators are built on basic K8s resource and controller concepts, but embed knowledge of a specific domain or application in order to automate the lifecycle of the application they manage.
Some preparation is required before the method steps of the embodiments of the present application are performed; specifically, a MIG crd is created.
Specifically, the MIG crd records the node on which the operation is to be performed, specifies the MIG configuration scheme of each GPU, and so on. Other information may also be included; see the following example:
apiVersion: V1
kind: MIG
metadata:
  name: example
spec:
  node: node2
  operate: CREATE
  vendor: NVIDIA
  migPlans:
  - gpuID: 0
    migPlan: 1
  - gpuID: 1
    migPlan: 1
Here, the field value of apiVersion indicates the version of the MIG crd instance, and the field value of kind indicates its resource type, MIG in the above example. metadata is the metadata, including name (the name is "example" because this is just an example). spec is the specification: node refers to the node on which the operation is to be performed; it may list multiple nodes, and in the above example only node2, i.e. server node 2 in the cluster, is shown. operate carries the configuration action instruction; for example, CREATE means creating MIG mode and configuring a MIG policy, and other instructions such as UPDATE or DELETE of a policy may also be used. vendor indicates the manufacturer of the GPU and marks which vendor driver interface is invoked when the MIG is subsequently processed at the bottom layer; in the embodiment of the present application, NVIDIA (a GPU vendor) is used. migPlans describes the MIG policy scheme: the gpuID attribute carries the number of the GPU card on which the MIG is to be configured, and the migPlan attribute carries the MIG policy label for that card, i.e. the second identification information corresponding to the target policy mentioned below.
After the MIG crd instance is created, the following operational steps may be performed. The method comprises the following steps:
step 110, when the operation event of the MIG is monitored, the operation event is parsed, and the instance information is obtained.
Specifically, the operator registers a MIG crd listener with kubernetes so that it can listen to any event in the cluster concerning the MIG crd. When an operation event of the MIG is monitored, the operation event is parsed to acquire the instance information. The instance information at least comprises first identification information of the node to be operated, a configuration action instruction, configuration parameter information corresponding to the target GPU in the node to be operated, and second identification information corresponding to the target strategy to be configured for the target GPU.
The node to be operated is a server node in the kubernetes cluster described above. The first identification information indicates on which node the operation event is to act. The configuration action instruction, as described above, indicates which configuration operation is to be performed, for example creating MIG mode and determining which type of MIG policy is to be configured. The configuration parameter information of the target GPU in the node to be operated and the second identification information are described in turn below.
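For reference only, a minimal sketch of how such a listener could be wired up with the controller-runtime library is shown below. The module path, the migv1.MIG Go type and the MIGReconciler name are assumptions for illustration; they are not taken from the patent text.

package controller

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	migv1 "example.com/mig-operator/api/v1" // hypothetical module path for the MIG crd types
)

// MIGReconciler is an assumed reconciler that reacts to MIG crd operation events.
type MIGReconciler struct {
	client.Client
}

// Reconcile is called for every create/update/delete event on a MIG instance; this is
// where the operator would parse the instance information (node, operate, vendor,
// migPlans) and dispatch the corresponding configuration action.
func (r *MIGReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var mig migv1.MIG
	if err := r.Get(ctx, req.NamespacedName, &mig); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// ... act on mig.Spec here ...
	return ctrl.Result{}, nil
}

// SetupWithManager registers the reconciler so that MIG crd events are watched in real time.
func (r *MIGReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).For(&migv1.MIG{}).Complete(r)
}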
And 120, selecting a target strategy from the preconfigured MIG strategy set according to the configuration parameter information and the second identification information when the configuration action instruction is determined to instruct to execute the first configuration action.
Specifically, the first configuration action is, for example, creating MIG mode and configuring a MIG policy. A target policy may then be selected from the preconfigured MIG policy set based on the configuration parameter information and the second identification information.
The configuration parameter information may include third identification information of the target GPU, and the second identification information is used to indicate the specific target policy. Selecting the target policy from the preconfigured MIG policy set may be implemented by the following method steps; see fig. 2:
step 210, according to the third identification information, matching the MIG policy subset corresponding to the target GPU from the MIG policy set.
And 220, selecting a target strategy from the MIG strategy subset according to the second identification information.
Specifically, the third identification information, such as the GPU ID introduced above, indicates the GPU type or the GPU name. The MIG policy subset corresponding to the third identification information of the target GPU is then matched from the MIG policy set. That is, the MIG policy set includes at least one MIG policy subset, each subset corresponding to one GPU and in turn containing a plurality of MIG policies, together with the second identification information corresponding to each policy. The target policy can therefore be selected from the MIG policy subset based on the second identification information.
In an alternative example, the MIG policy set may be embodied in the form of a configmap object. Wherein the MIG policy set includes a first data structure and a second data structure:
the first data structure comprises: at least one first type field, at least one second type field corresponding to each first type field, and at least one count combination field corresponding to each first type field;
the second data structure comprises a strategy subset corresponding to each first type field, wherein each strategy in the strategy subset is composed of a field value of any second type field corresponding to the first type field and a field value of any count combination field;
wherein the first type field is used for indicating the GPU type (corresponding to the third identification information); the second type field is used for indicating the sub-resource type corresponding to the first type field, and the count combination field is used for indicating the quantity of each sub-resource type. The field value of the sub-resource type and the field value of the count combination field under each GPU type constitute a sub-policy set corresponding to that GPU type.
In particular, the data structure of the MIG policy set may be seen in fig. 3, which includes a first data structure on the left side of fig. 3 and a second data structure on the right side of fig. 3. At least one first type field, at least one second type field corresponding to each first type field, and at least one count combination field corresponding to each first type field are included in the first data structure. The second data structure is the MIG policy content itself composed of the field value of any second type field corresponding to the first type field and the field value of any count combination field. Of course, the first data structure may include version information, attribute information, metadata information, and the like, in addition to the above-described contents. See in particular fig. 3:
The first data structure comprises:
apiVersion: v1; kind (attribute): ConfigMap; metadata: name: migconfig; data: config.json (first type field). The GPU type is configured through the key inside config.json in the configmap data; for example, the key value in fig. 3 may be A100-40G. All MIG instances that this GPU can generate are configured by the strategy attribute field (second type field): when the GPU type is A100-40G, the MIG instance types that may be generated include 1g.5gb, 2g.10gb, 3g.20gb, 4g.20gb and 7g.40gb, i.e. the MIG sub-resource types mentioned above. The combination information for the strategy entries is configured by the plans field (count combination field), e.g. "1": "70000" in fig. 3, which indicates the number of each sub-resource type. The strategy and plans fields together form the MIG policies, a plurality of policies form the policy set, and the target policy is selected from this set. The detailed policy content of 1g.5gb corresponding to "70000" is shown on the right side of fig. 3 and is one MIG policy; similarly, the detailed policy content of the 1g.5gb combinations corresponding to "13000", "002000", etc. is also shown on the right side of fig. 3. The right side of fig. 3 only illustrates 1g.5gb; other policy subsets are similar and therefore not shown.
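Combining the field names above, the MIG policy set configmap could take roughly the following form. The exact JSON layout inside config.json and the mapping of the labels "2" and "3" to the combinations "13000" and "002000" are illustrative assumptions based on fig. 3, not the literal content of the patent:

apiVersion: v1
kind: ConfigMap
metadata:
  name: migconfig
data:
  config.json: |
    {
      "A100-40G": {
        "strategy": ["1g.5gb", "2g.10gb", "3g.20gb", "4g.20gb", "7g.40gb"],
        "plans": {
          "1": "70000",
          "2": "13000",
          "3": "002000"
        }
      }
    }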
And step 130, determining the node to be operated according to the first identification information.
Specifically, for example, if the first identification information is node1, then the node to be operated is server 1 (node 1) in the kubernetes cluster.
And step 140, according to the configuration parameter information, calling the target driving interface to run preset code logic so as to configure the target strategy into the target GPU.
In an alternative example, the configuration parameter information may include manufacturer information of the target GPU in addition to the third identification information mentioned above.
Then, according to the configuration parameter information, the target driving interface is called to run the preset code logic, so as to configure the target policy into the target GPU, which can be implemented by the following manner, referring specifically to fig. 4, and the method steps include:
step 410, determining the target driving interface according to the manufacturer information of the target GPU.
Specifically, different manufacturers correspond to different driving interfaces, so the driving interface to be called needs to be determined according to the manufacturer information of the target GPU.
And step 420, determining the target GPU in the node to be operated according to the third identification information.
In an alternative example, the third identification information may be, for example, the GPU ID in the MIG crd example described above, but may also be other information, for example the name or number of the target GPU.
Step 430, call the target driver interface to run the preset code logic for configuring the target policy into the target GPU.
Specifically, the steps of calling the target driver interface to run the preset code logic and configuring the target policy into the target GPU may follow existing mature techniques, and are not described in detail here.
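As a concrete illustration only, when the vendor is NVIDIA the preset code logic could shell out to the nvidia-smi tool, which supports enabling MIG mode and creating MIG instances from profile names such as 1g.5gb. The sketch below assumes nvidia-smi is available on the node and that any GPU reset required by the hardware is handled elsewhere; the package and function names are hypothetical, and the patent does not prescribe this particular mechanism.

package migconfig

import (
	"fmt"
	"os/exec"
)

// applyMIGPlan enables MIG mode on the GPU identified by gpuID and creates one GPU
// instance (plus its compute instance, via -C) per profile in the plan,
// e.g. []string{"1g.5gb", "1g.5gb", "2g.10gb"}.
func applyMIGPlan(gpuID string, profiles []string) error {
	// Enable MIG mode on the target GPU.
	if out, err := exec.Command("nvidia-smi", "-i", gpuID, "-mig", "1").CombinedOutput(); err != nil {
		return fmt.Errorf("enable MIG mode on GPU %s: %v: %s", gpuID, err, out)
	}
	// Create the GPU instances described by the target policy.
	for _, p := range profiles {
		if out, err := exec.Command("nvidia-smi", "mig", "-i", gpuID, "-cgi", p, "-C").CombinedOutput(); err != nil {
			return fmt.Errorf("create MIG instance %s on GPU %s: %v: %s", p, gpuID, err, out)
		}
	}
	return nil
}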
In an alternative example, when determining that the configuration action instruction is to instruct execution of the second configuration action, the method includes:
and replacing the MIG strategy configured in the target GPU with the target strategy.
Specifically, the second configuration action may be to update the MIG policy. When executing the method step of replacing the MIG policy configured in the target GPU with the target policy, the MIG policy configured in the target GPU may be directly replaced with the target policy.
However, considering that some hardware does not support directly replacing one MIG policy with another, the method may further comprise the following steps; see fig. 5:
step 510, the configured MIG mode in the target GPU is cleared and the configured MIG policy in the target GPU is deleted.
And step 520, calling a target driving interface to run preset code logic according to the second identification information, so as to configure the target strategy into the target GPU.
That is, the GPU is restored to an initial state, and then the MIG mode is recreated and the target policy is configured.
Optionally, when determining that the configuration action instruction is to instruct execution of the third configuration action, the method further comprises:
and clearing the configured MIG mode in the target GPU, and deleting the configured MIG strategy in the target GPU.
Specifically, the third configuration action instructs deletion of the currently configured MIG policy: the configured MIG mode in the target GPU is cleared directly, and the configured MIG strategy in the target GPU is deleted.
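Continuing the nvidia-smi-based sketch above (same package and assumptions), clearing a card for the update or delete actions could amount to destroying the existing compute and GPU instances and then disabling MIG mode, which restores the GPU to its initial state:

// clearMIG deletes all compute and GPU instances on the GPU and then disables MIG mode.
func clearMIG(gpuID string) error {
	steps := [][]string{
		{"mig", "-i", gpuID, "-dci"}, // destroy compute instances
		{"mig", "-i", gpuID, "-dgi"}, // destroy GPU instances
		{"-i", gpuID, "-mig", "0"},   // disable MIG mode
	}
	for _, args := range steps {
		if out, err := exec.Command("nvidia-smi", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("nvidia-smi %v: %v: %s", args, err, out)
		}
	}
	return nil
}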
The implementation process of the method including the three configuration actions may be specifically shown in fig. 6, fig. 6 illustrates a simple overall implementation architecture diagram, and the specific operation flow is not repeated here.
In an alternative example, the instance information further includes status information of the configuration instance. The Operator is also configured with MIG state maintenance logic to maintain MIG state. Thus, the method further comprises the following method steps, see in particular fig. 7, comprising:
step 710, screen the GPU for outstanding policy configurations.
In step 720, when it is determined that the MIG policy in the first GPU is consistent with the target policy, the state information in the instance information corresponding to the first GPU is updated to be configured successfully.
Specifically, after the Operator starts working it may start the MIG crd instance state maintenance logic: once a MIG operation event has been monitored, it periodically screens the MIG crd instances in the cluster whose configuration has not completed, and checks one by one whether the MIG policy on each not-yet-configured GPU is consistent with the target policy. When the MIG strategy in the first GPU is determined to be consistent with the target strategy, the state information in the instance information corresponding to the first GPU is updated to "configured successfully". The first GPU is any one of the GPUs whose policy configuration has not completed.
In step 730, when it is determined that the MIG policy in the first GPU is inconsistent with the target policy, an operating state of code logic operating in the first GPU is detected.
Step 740, when the running state is that the running is completed, updating the state information to be the configuration failure;
or alternatively,
step 750, detecting the running state of the code logic running in the first GPU again after a preset time period when the running state is not running; and when the running state is that the running is completed, detecting whether the MIG strategy in the first GPU is consistent with the target strategy or not again.
That is, if the code logic has finished running and at this time the MIG policy in the first GPU is still inconsistent with the target policy, the configuration has failed, and the state information therefore needs to be updated to "configuration failed".
Or, if the code logic has not finished running at this time, configuration failure cannot be concluded directly; whether the MIG strategy in the first GPU is consistent with the target strategy is judged again after the run completes. If it is consistent, the state information is updated to "configured successfully"; if not, it is updated to "configuration failed".
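A compact sketch of the decision made for each not-yet-configured GPU in this state-maintenance loop is given below; it continues the same assumed package, and the function name and status strings are illustrative, not terms from the patent:

// checkGPUStatus decides the new configuration status for one GPU, given the MIG
// policy currently reported by the card, the target policy from the MIG crd instance,
// and whether the configuration job (the preset code logic) has finished running.
func checkGPUStatus(currentPolicy, targetPolicy string, jobFinished bool) string {
	switch {
	case currentPolicy == targetPolicy:
		return "ConfigSucceeded" // card already matches the target policy
	case jobFinished:
		return "ConfigFailed" // job finished but the card still differs from the target
	default:
		return "ConfigPending" // job still running: keep pending and re-check after a delay
	}
}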
The specific logic operation flow may be referred to as fig. 8, and the whole state updating process is simply illustrated in fig. 8, and the specific operation flow is not described herein because it has been described in detail above.
When an operation event of the MIG is monitored, the operation event is parsed to obtain instance information. The parsed instance information comprises at least: first identification information of the operation node, indicating on which node the MIG configuration operation is to be executed on a GPU; a configuration action instruction, indicating which operation is to be executed on that node; configuration parameter information corresponding to the target GPU in the node to be operated; and second identification information corresponding to the target strategy to be configured for the target GPU. When the configuration action instruction is determined to indicate execution of the first configuration action, the target strategy is selected from the preconfigured MIG strategy set according to the configuration parameter information and the second identification information, the node to be operated is determined according to the first identification information, and the target driving interface is called according to the configuration parameter information to execute code logic, so as to configure the target strategy into the target GPU. Throughout the process, MIG configuration information is written into MIG instances, the target strategy is determined from the instance information by monitoring the operation events triggered by instance changes, and a corresponding scheduling task is then generated to call the target driving interface and execute the underlying logic, completing the MIG processing of the target GPU. In addition, because the MIG strategy set approach is adopted, the MIG configuration strategy set can be freely modified from outside without changing the original logic in the operator component, which reduces intrusion into the original project logic when service requirements change.
In the foregoing, several method embodiments for resource allocation provided in the present application, and other embodiments for resource allocation provided in the present application are described below, specifically, see the following.
Fig. 9 is a resource allocation apparatus according to an embodiment of the present invention, where the apparatus includes: a listening module 901, a parsing module 902, a processing module 903, and a selection module 904.
The monitoring module 901 is configured to monitor operation events of a multi-instance GPU (MIG);
the analyzing module 902 is configured to, when the monitoring module 901 monitors an operation event of the MIG, analyze the operation event and obtain instance information, where the instance information at least includes first identification information of a node to be operated, a configuration action instruction, configuration parameter information corresponding to a target GPU in the node to be operated, and second identification information corresponding to a target policy to be configured for the target GPU;
a processing module 903, configured to determine an MIG configuration action to be performed according to the configuration action instruction;
a selecting module 904, configured to select, when determining that the MIG configuration action to be performed is a first configuration action, a target policy from the preconfigured MIG policy set according to the configuration parameter information and the second identification information;
The processing module 903 is further configured to determine a node to be operated according to the first identification information; and calling a target driving interface to run preset code logic according to the configuration parameter information, so as to configure the target strategy into the target GPU.
Optionally, the MIG is a MIG of a custom resource type.
Optionally, the configuration parameter information includes manufacturer information of the target GPU and third identification information of the target GPU;
the processing module 903 is specifically configured to determine a target driving interface according to manufacturer information of the target GPU; determining a target GPU in the node to be operated according to the third identification information; and calling a target driving interface to run preset code logic for configuring the target strategy into the target GPU.
Optionally, the selecting module 904 is further configured to match, according to the third identification information, a MIG policy subset corresponding to the target GPU from the MIG policy set; and selecting a target strategy from the MIG strategy subset according to the second identification information.
Optionally, the MIG policy set is embodied in the form of a configmap object.
Optionally, the MIG policy set includes: a first data structure and a second data structure;
the first data structure comprises: at least one first type field, at least one second type field corresponding to each first type field, and at least one count combination field corresponding to each first type field;
The second data structure comprises a strategy subset corresponding to each first type field, wherein each strategy in the strategy subset is composed of a field value of any second type field corresponding to the first type field and a field value of any count combination field;
wherein the first type field is used to indicate the GPU type; the second type field is used for indicating the sub-resource type corresponding to the first type field, and the count combination field is used for indicating the quantity of each sub-resource type.
Optionally, the processing module 903 is further configured to create a processing task container for executing the target policy configuration task;
and calling a target driving interface to run preset code logic by using the processing task container, so as to configure the target strategy into the target GPU.
Optionally, the first configuring act includes: creating MIG mode and configuring MIG strategy.
Optionally, the processing module 903 is further configured to replace the MIG policy configured in the target GPU with the target policy when determining that the MIG configuration action to be performed is the second configuration action.
Optionally, the second configuring act includes: the MIG policy is updated.
Optionally, the processing module 903 is specifically configured to clear the MIG mode configured in the target GPU and delete the MIG policy configured in the target GPU;
And according to the second identification information, calling a target driving interface to run preset code logic for configuring the target strategy into the target GPU.
Optionally, the processing module 903 is further configured to clear the MIG mode configured in the target GPU and delete the MIG policy configured in the target GPU when it is determined that the MIG configuration action to be performed is the third configuration action.
Optionally, the third configuring act includes: the MIG policy that is currently configured is deleted.
Optionally, the instance information further includes status information of the configuration instance;
the processing module 903 is further configured to screen GPUs that have not completed policy configuration;
when the MIG strategy in the first GPU is determined to be consistent with the target strategy, updating the state information in the instance information corresponding to the first GPU to be configured successfully, wherein the first GPU is any one of the GPUs which are not configured by the strategy.
Optionally, the processing module 903 is further configured to detect an operation state of code logic that is operated in the first GPU when it is determined that the MIG policy in the first GPU is inconsistent with the target policy;
when the running state is that the running is completed, updating the state information to be the configuration failure;
or when the running state is not running, detecting the running state of the code logic running in the first GPU again after a preset time period is set;
And when the running state is that the running is completed, detecting whether the MIG strategy in the first GPU is consistent with the target strategy or not again.
Optionally, the monitoring module 901 is specifically configured to monitor, by using a pre-registered MIG monitor, an operation event of the MIG in real time.
The functions performed by each component in the resource allocation device provided in the embodiment of the present invention are described in detail in any of the above method embodiments, so that a detailed description is omitted here.
When an operation event of the MIG is monitored, the operation event is parsed to obtain instance information. The parsed instance information comprises at least: first identification information of the operation node, indicating on which node the MIG configuration operation is to be executed on a GPU; a configuration action instruction, indicating which operation is to be executed on that node; configuration parameter information corresponding to the target GPU in the node to be operated; and second identification information corresponding to the target strategy to be configured for the target GPU. When the configuration action instruction is determined to indicate execution of the first configuration action, the target strategy is selected from the preconfigured MIG strategy set according to the configuration parameter information and the second identification information, the node to be operated is determined according to the first identification information, and the target driving interface is called according to the configuration parameter information to execute code logic, so as to configure the target strategy into the target GPU. Throughout the process, MIG configuration information is written into MIG instances, the target strategy is determined from the instance information by monitoring the operation events triggered by instance changes, and a corresponding scheduling task is then generated to call the target driving interface and execute the underlying logic, completing the MIG processing of the target GPU. In addition, because the MIG strategy set approach is adopted, the MIG configuration strategy set can be freely modified from outside without changing the original logic in the operator component, which reduces intrusion into the original project logic when service requirements change.
As shown in fig. 10, the embodiment of the present application provides an electronic device, which includes a processor 111, a communication interface 112, a memory 113, and a communication bus 114, where the processor 111, the communication interface 112, and the memory 113 perform communication with each other through the communication bus 114.
A memory 113 for storing a computer program;
in one embodiment of the present application, the processor 111 is configured to implement the resource allocation method provided in any one of the foregoing method embodiments when executing the program stored in the memory 113, where the method includes:
when an operation event of a multi-instance GPU (MIG) is monitored, parsing the operation event to obtain instance information, wherein the instance information at least comprises first identification information of a node to be operated, a configuration action instruction, configuration parameter information corresponding to a target graphics processing unit (GPU) in the node to be operated, and second identification information corresponding to a target strategy to be configured for the target GPU;
when the configuration action instruction is determined to be used for indicating to execute the first configuration action, selecting a target strategy from a preconfigured MIG strategy set according to the configuration parameter information and the second identification information;
determining a node to be operated according to the first identification information;
And calling a target driving interface to run preset code logic according to the configuration parameter information, so as to configure the target strategy into the target GPU.
Optionally, the MIG is a MIG of a custom resource type.
Optionally, the configuration parameter information includes manufacturer information of the target GPU and third identification information of the target GPU;
according to the configuration parameter information, a target driving interface is called to run preset code logic for configuring a target strategy into a target GPU, and the method comprises the following steps:
determining a target driving interface according to manufacturer information of the target GPU;
determining a target GPU in the node to be operated according to the third identification information;
and calling a target driving interface to run preset code logic for configuring the target strategy into the target GPU.
Optionally, when determining that the configuration action instruction is used to instruct to perform the first configuration action, selecting, according to the configuration parameter information and the second identification information, a target policy from the preconfigured MIG policy set, including:
according to the third identification information, matching MIG strategy subsets corresponding to the target GPU from the MIG strategy sets;
and selecting a target strategy from the MIG strategy subset according to the second identification information.
Optionally, the MIG policy set is embodied in the form of a configmap object.
Optionally, the MIG policy set includes: a first data structure and a second data structure;
the first data structure comprises: at least one first type field, at least one second type field corresponding to each first type field, and at least one count combination field corresponding to each first type field;
the second data structure comprises a strategy subset corresponding to each first type field, wherein each strategy in the strategy subset is composed of a field value of any second type field corresponding to the first type field and a field value of any count combination field;
wherein the first type field is used to indicate the GPU type; the second type field is used for indicating the sub-resource type corresponding to the first type field, and the count combination field is used for indicating the quantity of each sub-resource type.
Optionally, before the target driver interface is called to run the preset code logic to configure the target policy on the target GPU, the method further includes:
creating a processing task container for executing the target policy configuration task;
and using the processing task container to call the target driver interface to run the preset code logic, so as to configure the target policy on the target GPU. A sketch of one way to create such a container is given below.
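On a Kubernetes cluster, one way (not mandated by the application) to obtain such a processing task container is to submit a short-lived Job pinned to the node to be operated on. The sketch below uses client-go; the namespace, container image, and argument names are assumptions.

```go
package migsketch

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createProcessingTaskContainer submits a short-lived Job whose single container runs the
// policy-configuration logic on the selected node.
func createProcessingTaskContainer(ctx context.Context, cs kubernetes.Interface, namespace, nodeName, gpuIndex, policyID string) error {
	backoff := int32(0)
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "mig-config-",
			Namespace:    namespace,
		},
		Spec: batchv1.JobSpec{
			BackoffLimit: &backoff,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					NodeName:      nodeName, // pin the pod to the node to be operated on
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:  "mig-config",
						Image: "example.com/mig-configurator:latest", // hypothetical image
						Args:  []string{"--gpu", gpuIndex, "--policy", policyID},
					}},
				},
			},
		},
	}
	_, err := cs.BatchV1().Jobs(namespace).Create(ctx, job, metav1.CreateOptions{})
	return err
}
```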
Optionally, the first configuration action includes: creating the MIG mode and configuring the MIG policy.
Optionally, when it is determined that the configuration action instruction indicates that a second configuration action is to be performed, the method includes:
replacing the MIG policy configured on the target GPU with the target policy.
Optionally, the second configuration action includes: updating the MIG policy.
Optionally, replacing the MIG policy configured on the target GPU with the target policy includes:
clearing the MIG mode configured on the target GPU, and deleting the MIG policy configured on the target GPU;
and calling the target driver interface to run the preset code logic according to the second identification information, so as to configure the target policy on the target GPU. An illustrative tear-down-and-reconfigure sketch follows.
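Continuing the earlier nvidia-smi assumption, replacing a configured policy can be expressed as tearing down the existing layout and then re-running the configuration logic; the command sequence below is only an illustration for NVIDIA GPUs and is not the application's prescribed implementation.

```go
package migsketch

import (
	"context"
	"fmt"
	"os/exec"
)

// replacePolicy clears the configured MIG layout on the target GPU and then re-runs the
// configuration logic with the new target policy (illustrative).
func replacePolicy(ctx context.Context, gpuIndex string, configure func(context.Context, string) error) error {
	// Delete the configured MIG policy: compute instances first, then GPU instances,
	// then clear (disable) the MIG mode itself.
	steps := [][]string{
		{"mig", "-i", gpuIndex, "-dci"}, // destroy compute instances
		{"mig", "-i", gpuIndex, "-dgi"}, // destroy GPU instances
		{"-i", gpuIndex, "-mig", "0"},   // clear the configured MIG mode
	}
	for _, args := range steps {
		if out, err := exec.CommandContext(ctx, "nvidia-smi", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("nvidia-smi %v: %v: %s", args, err, out)
		}
	}
	// Re-apply: re-enable MIG mode and configure the target policy on the target GPU.
	return configure(ctx, gpuIndex)
}
```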
Optionally, when it is determined that the configuration action instruction indicates that a third configuration action is to be performed, the method further includes:
clearing the MIG mode configured on the target GPU, and deleting the MIG policy configured on the target GPU.
Optionally, the third configuration action includes: deleting the currently configured MIG policy.
Optionally, the instance information further includes state information of the configuration instance, and the method further includes:
screening out the GPUs for which policy configuration has not been completed;
when it is determined that the MIG policy on a first GPU is consistent with the target policy, updating the state information in the instance information corresponding to the first GPU to indicate successful configuration, where the first GPU is any one of the GPUs for which policy configuration has not been completed.
Optionally, when it is determined that the MIG policy on the first GPU is inconsistent with the target policy, detecting the running state of the code logic running on the first GPU;
when the running state indicates that the run has completed, updating the state information to indicate a configuration failure;
or, when the running state indicates that the run has not yet completed, detecting the running state of the code logic running on the first GPU again after a preset period of time;
and when the running state then indicates that the run has completed, detecting again whether the MIG policy on the first GPU is consistent with the target policy. A sketch of this checking loop is given below.
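This result-checking step reads like a small reconciliation routine: for each GPU whose configuration is still pending, compare the applied layout with the target policy and either record success, record failure, or look again after a delay. The statusStore interface and all helper names below are invented for the sketch.

```go
package migsketch

import (
	"context"
	"time"
)

// GPUStatus captures what the checker needs to know about one pending GPU (illustrative).
type GPUStatus struct {
	AppliedPolicyID string // MIG policy currently found on the GPU
	TargetPolicyID  string // target policy from the instance information
	LogicFinished   bool   // whether the preset code logic has finished running
}

// statusStore is a hypothetical persistence layer for per-GPU configuration state.
type statusStore interface {
	PendingGPUs(ctx context.Context) ([]string, error) // GPUs that have not completed policy configuration
	Inspect(ctx context.Context, gpu string) (GPUStatus, error)
	SetState(ctx context.Context, gpu, state string) error // e.g. "Succeeded", "Failed"
	Requeue(gpu string, after time.Duration)               // check this GPU again later
}

// checkConfiguration walks the pending GPUs and updates their state information.
func checkConfiguration(ctx context.Context, s statusStore, retry time.Duration) error {
	gpus, err := s.PendingGPUs(ctx)
	if err != nil {
		return err
	}
	for _, gpu := range gpus {
		st, err := s.Inspect(ctx, gpu)
		if err != nil {
			return err
		}
		switch {
		case st.AppliedPolicyID == st.TargetPolicyID:
			// Applied layout matches the target policy: configuration succeeded.
			if err := s.SetState(ctx, gpu, "Succeeded"); err != nil {
				return err
			}
		case st.LogicFinished:
			// The code logic has finished but the layouts still differ: configuration failed.
			if err := s.SetState(ctx, gpu, "Failed"); err != nil {
				return err
			}
		default:
			// Still running: look again after a preset period of time.
			s.Requeue(gpu, retry)
		}
	}
	return nil
}
```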
Optionally, listening for an operation event of the MIG includes:
listening for operation events of the MIG in real time by using a pre-registered MIG listener, as in the sketch below.
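A pre-registered listener can be pictured as a loop that feeds operation events into the handler until it is stopped. To keep the sketch self-contained, the event source is a plain channel; a real controller would more likely register an informer or a watch for the custom resource.

```go
package migsketch

import "context"

// OperationEvent is the unit delivered by the pre-registered MIG listener (illustrative).
type OperationEvent struct {
	Payload []byte
}

// runListener consumes MIG operation events in real time until the context is cancelled.
// The events channel stands in for whatever watch or informer mechanism produces the events.
func runListener(ctx context.Context, events <-chan OperationEvent, handle func(context.Context, []byte) error, onError func(error)) {
	for {
		select {
		case <-ctx.Done():
			return
		case ev, ok := <-events:
			if !ok {
				return
			}
			if err := handle(ctx, ev.Payload); err != nil {
				onError(err) // surface the failure; a real controller might retry or record status
			}
		}
	}
}
```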
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the resource allocation method provided by any of the method embodiments described above.
It should be noted that in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises that element.
The foregoing description is merely that of exemplary embodiments of the present invention and is provided to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (13)

1. A resource allocation method, the method comprising:
when an operation event of a multi-instance GPU (MIG) is detected, parsing the operation event to obtain instance information, wherein the instance information comprises at least first identification information of a node to be operated on, a configuration action instruction, configuration parameter information corresponding to a target graphics processing unit (GPU) in the node to be operated on, and second identification information corresponding to a target policy to be configured for the target GPU;
when it is determined that the configuration action instruction indicates that a first configuration action is to be performed, selecting the target policy from a preconfigured MIG policy set according to the configuration parameter information and the second identification information;
determining the node to be operated on according to the first identification information;
and calling a target driver interface to run preset code logic according to the configuration parameter information, so as to configure the target policy on the target GPU.
2. The method according to claim 1, wherein the configuration parameter information comprises manufacturer information of the target GPU and third identification information of the target GPU;
and the calling a target driver interface to run preset code logic according to the configuration parameter information, so as to configure the target policy on the target GPU, comprises:
determining the target driver interface according to the manufacturer information of the target GPU;
determining the target GPU in the node to be operated on according to the third identification information;
and calling the target driver interface to run the preset code logic to configure the target policy on the target GPU.
3. The method according to claim 2, wherein when it is determined that the configuration action instruction indicates that the first configuration action is to be performed, the selecting the target policy from a preconfigured MIG policy set according to the configuration parameter information and the second identification information comprises:
matching a MIG policy subset corresponding to the target GPU from the MIG policy set according to the third identification information;
and selecting the target policy from the MIG policy subset according to the second identification information.
4. The method according to claim 3, wherein the MIG policy set comprises a first data structure and a second data structure;
the first data structure comprises: at least one first type field, at least one second type field corresponding to each of the first type fields, and at least one count combination field corresponding to each of the first type fields;
the second data structure comprises a policy subset corresponding to each first type field, wherein each policy in the policy subset is composed of a field value of one of the second type fields corresponding to that first type field and a field value of one of the count combination fields;
wherein the first type field indicates a GPU type, the second type field indicates a sub-resource type corresponding to the first type field, and the count combination field indicates the number of each sub-resource type.
5. The method according to claim 2, wherein before the target driver interface is called to run the preset code logic to configure the target policy on the target GPU, the method further comprises:
creating a processing task container for executing the target policy configuration task;
and using the processing task container to call the target driver interface to run the preset code logic, so as to configure the target policy on the target GPU.
6. The method according to claim 1, wherein when it is determined that the configuration action instruction indicates that a second configuration action is to be performed, the method comprises:
replacing the MIG policy configured on the target GPU with the target policy.
7. The method according to claim 6, wherein the replacing the MIG policy configured on the target GPU with the target policy comprises:
clearing the MIG mode configured on the target GPU, and deleting the MIG policy configured on the target GPU;
and calling the target driver interface to run the preset code logic according to the second identification information, so as to configure the target policy on the target GPU.
8. The method according to claim 1, wherein when it is determined that the configuration action instruction indicates that a third configuration action is to be performed, the method further comprises:
clearing the MIG mode configured on the target GPU, and deleting the MIG policy configured on the target GPU.
9. The method according to any one of claims 1 to 8, wherein the instance information further comprises state information of a configuration instance, and the method further comprises:
screening out the GPUs for which policy configuration has not been completed;
when it is determined that the MIG policy on a first GPU is consistent with the target policy, updating the state information in the instance information corresponding to the first GPU to indicate successful configuration, wherein the first GPU is any one of the GPUs for which policy configuration has not been completed.
10. The method according to claim 9, wherein when it is determined that the MIG policy on the first GPU is inconsistent with the target policy, the method further comprises:
detecting the running state of the code logic running on the first GPU;
when the running state indicates that the run has completed, updating the state information to indicate a configuration failure;
or,
when the running state indicates that the run has not yet completed, detecting the running state of the code logic running on the first GPU again after a preset period of time;
and when the running state then indicates that the run has completed, detecting again whether the MIG policy on the first GPU is consistent with the target policy.
11. A resource allocation apparatus, the apparatus comprising:
a listening module configured to listen for operation events of a multi-instance GPU (MIG);
a parsing module configured to, when the listening module detects an operation event of the MIG, parse the operation event to obtain instance information, wherein the instance information comprises at least first identification information of a node to be operated on, a configuration action instruction, configuration parameter information corresponding to a target graphics processing unit (GPU) in the node to be operated on, and second identification information corresponding to a target policy to be configured for the target GPU;
a processing module configured to determine, according to the configuration action instruction, the MIG configuration action to be performed;
a selecting module configured to, when it is determined that the MIG configuration action to be performed is a first configuration action, select the target policy from a preconfigured MIG policy set according to the configuration parameter information and the second identification information;
the processing module being further configured to determine the node to be operated on according to the first identification information, and to call a target driver interface to run preset code logic according to the configuration parameter information, so as to configure the target policy on the target GPU.
12. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the steps of the resource allocation method according to any one of claims 1 to 10 when executing the program stored in the memory.
13. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the resource allocation method according to any one of claims 1 to 10.
CN202310078341.XA 2023-01-31 2023-01-31 Resource allocation method and device and electronic equipment Pending CN116360977A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310078341.XA CN116360977A (en) 2023-01-31 2023-01-31 Resource allocation method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310078341.XA CN116360977A (en) 2023-01-31 2023-01-31 Resource allocation method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN116360977A true CN116360977A (en) 2023-06-30

Family

ID=86930852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310078341.XA Pending CN116360977A (en) 2023-01-31 2023-01-31 Resource allocation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN116360977A (en)

Similar Documents

Publication Publication Date Title
US11121921B2 (en) Dynamic auto-configuration of multi-tenant PaaS components
US11755343B2 (en) Methods, systems and apparatus to trigger a workflow in a cloud computing environment
US20210406079A1 (en) Persistent Non-Homogeneous Worker Pools
US20210111957A1 (en) Methods, systems and apparatus to propagate node configuration changes to services in a distributed environment
US9851989B2 (en) Methods and apparatus to manage virtual machines
CN110413288B (en) Application deployment method, device, server and storage medium
CN112424750A (en) Multi-cluster supply and management method on cloud platform
CN112437915A (en) Method for monitoring multiple clusters and application programs on cloud platform
US8954859B2 (en) Visually analyzing, clustering, transforming and consolidating real and virtual machine images in a computing environment
EP3202085A1 (en) Topology based management of second day operations
WO2016053304A1 (en) Topology based management with compliance policies
CN113687912A (en) Container cluster management method, device and system, electronic equipment and storage medium
EP3128416A1 (en) Sdn application integration, management and control method, system and device
US11528186B2 (en) Automated initialization of bare metal servers
US9256509B1 (en) Computing environment analyzer
US11108638B1 (en) Health monitoring of automatically deployed and managed network pipelines
CN111679888A (en) Deployment method and device of agent container
US11163552B2 (en) Federated framework for container management
CN113900670B (en) Cluster server application deployment system
CN107679691B (en) Working equipment management method and system
CN116360977A (en) Resource allocation method and device and electronic equipment
US20210373868A1 (en) Automated Deployment And Management Of Network Intensive Applications
CN112905306A (en) Multi-cluster container management method and device, electronic equipment and storage medium
US11743188B2 (en) Check-in monitoring for workflows
US20230418729A1 (en) Debugging operator errors in a distributed computing environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination