CN117608754A - Processing method and device for multi-resource pool application, electronic equipment and storage medium - Google Patents
- Publication number
- CN117608754A (application number CN202311639282.5A)
- Authority
- CN
- China
- Prior art keywords
- workload
- resource pool
- resource
- batchworkload
- workloads
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45595—Network integration; Enabling network access in virtual machine instances
Abstract
The application discloses a processing method and device for multi-resource pool applications, an electronic device, and a storage medium. A CRD custom resource type is constructed so that the declaration of multiple workloads spanning resource pools is completed in a single CRD object. An API Server of a Kubernetes cluster is monitored to acquire resource object events of the BatchWorkload resource type in the cluster. After it is monitored that a BatchWorkload instance has been created in the cluster, the monitored general workload information and resource pool topology distribution of the BatchWorkload are parsed, the workload of each resource pool is generated, and the application operation of all workloads of the resource pools is completed. The embodiments of the application improve the efficiency of multi-resource pool applications and reduce cost.
Description
Technical Field
The disclosure relates to the technical field of the Internet, and in particular to a processing method and device for multi-resource pool applications, an electronic device, and a storage medium.
Background
With the rapid development of cloud-native technology, more and more users migrate their business to Kubernetes clusters for unified orchestration and management. In some scenarios, however, the node pools managed by a user's cluster have obvious regional distribution properties, and the same service system needs to be deployed in different regions. The computing nodes of each resource pool are first marked with the same label; the user then creates a workload for each resource pool and configures a nodeSelector on each region's workload to select the corresponding resource pool label. When the user submits the workloads of the several resource pools to the Kubernetes cluster, the cluster's management control component schedules each workload to the corresponding nodes according to its nodeSelector label, thereby realizing batch orchestration and deployment of the same service system across multiple resource pools.
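The per-pool workflow described above can be sketched in a few lines of Python. This is only an illustration: the label key `pool`, the image name, and the pool names are assumptions for the example, and real manifests would be full Kubernetes Deployment specs.

```python
# Prior-art approach sketched as plain dicts: one workload manifest per
# resource pool, differing only in name, nodeSelector, and replica count.
# The label key "pool", the image, and the pool names are illustrative.

def prior_art_manifests(app, image, pools):
    """Build one Deployment-like manifest per resource pool."""
    manifests = []
    for pool, replicas in pools.items():
        manifests.append({
            "kind": "Deployment",
            "metadata": {"name": f"{app}-{pool}"},
            "spec": {
                "replicas": replicas,
                "template": {
                    "spec": {
                        # pins pods to nodes labeled with this pool's name
                        "nodeSelector": {"pool": pool},
                        "containers": [{"name": app, "image": image}],
                    }
                },
            },
        })
    return manifests

manifests = prior_art_manifests("app", "nginx:1.25", {"suzhou": 2, "guangzhou": 3})
```

Every manifest here is near-identical, which is exactly the duplication the disclosure sets out to remove.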
When a user needs to deploy the same service system in several regionally distributed resource pools, the existing scheme requires creating a corresponding workload for each resource pool in turn, even though, apart from a small amount of differentiated information such as replica counts and labels, most of the information in the workloads of the different resource pools is identical. This greatly reduces the efficiency of the user's deployment and operation and maintenance, and increases the working cost.
Disclosure of Invention
The disclosure provides a processing method and device for multi-resource pool applications, an electronic device, and a storage medium, with the main aim of improving the efficiency of multi-resource pool applications and reducing cost.
According to a first aspect of the present disclosure, there is provided a processing method for a multi-resource pool application, including:
monitoring an API Server of a Kubernetes cluster, and acquiring a resource object event of the BatchWorkload resource type in the Kubernetes cluster, wherein the BatchWorkload resource type is a CRD custom resource type through which the declaration of multiple workloads across resource pools is completed in one CRD object, the CRD object comprising a workLoads field and a regionSpec field, wherein the workLoads field is general information abstracted from workloads of the Deployment or StatefulSet type in the Kubernetes cluster, and the regionSpec field is used for declaring the topology distribution of the workloads in different resource pools;
after it is monitored that a BatchWorkload instance has been created in the Kubernetes cluster, parsing the monitored general workload information and resource pool topology distribution of the BatchWorkload, generating the workload of each resource pool, and completing the application operation of all workloads of the resource pools.
Optionally, the parsing of the monitored general workload information of the BatchWorkload object and the topology distribution of the resource pools, the generating of the workload of each resource pool, and the completing of the application operation of all workloads of the resource pools includes:
generating a general workload template object of the resource pools according to the monitored configuration in workLoads.deploymentSpec or workLoads.statefulSetSpec of the BatchWorkload;
automatically generating a corresponding workload object for each resource pool according to the general workload template object and the topology distribution of the workloads in different resource pools configured by the regionSpec field;
calling the Kubernetes API according to the workload objects, and submitting the corresponding number of Deployment-type workload objects to the Kubernetes cluster;
and distributing the application to the working nodes of the different resource pools according to the nodeSelector labels specified when the workloads were created, so that when the kubelet component on a corresponding working node monitors that the application has been scheduled, the kubelet component pulls up the containers to perform the corresponding operations.
Optionally, after the automatic generation of a corresponding workload object for each resource pool according to the general workload template object and the topology distribution of the workloads in different resource pools configured by the regionSpec field, the method further includes:
storing the generated workload objects as key-value pairs, wherein the key of each pair is the name of the workload and the value is the specific information of the workload.
Optionally, the method further comprises:
constructing the CRD custom resource type BatchWorkload in the Kubernetes cluster, declaring the general information of a group of workloads in the workLoads field, and declaring the topology distribution and replica counts of the different resource pools in the regionSpec field.
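As an illustration of such a declaration, the following is a hypothetical BatchWorkload object written as a plain Python dict. The CRD group/version and the exact nesting of the workLoads and regionSpec fields are assumptions based on the description, not the patent's actual schema:

```python
# Hypothetical BatchWorkload object. Field names workLoads, deploymentSpec,
# and regionSpec follow the description; the apiVersion and nesting are assumed.
batch_workload = {
    "apiVersion": "example.com/v1",      # assumed CRD group/version
    "kind": "BatchWorkload",
    "metadata": {"name": "app"},
    "spec": {
        # general information shared by every generated workload
        "workLoads": {
            "deploymentSpec": {
                "template": {
                    "spec": {"containers": [{"name": "app", "image": "nginx:1.25"}]}
                }
            }
        },
        # topology: one entry per resource pool, with its replica count
        "regionSpec": [
            {"region": "suzhou", "replicas": 2},
            {"region": "guangzhou", "replicas": 3},
        ],
    },
}
```

A single object like this replaces the two separate per-pool Deployments of the prior-art scheme.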
Optionally, the method further comprises:
when the configuration of the workloads needs to be maintained, modifying the corresponding configuration of the workLoads field and/or the regionSpec field under the BatchWorkload object according to the maintenance requirement.
Optionally, the maintenance includes dynamic scaling of workload replicas, and the modifying, when the configuration of the workloads needs to be maintained, of the corresponding configuration of the workLoads field and/or the regionSpec field under the BatchWorkload object according to the maintenance requirement includes:
periodically collecting the monitoring data corresponding to the workloads on each node of the resource pools;
calculating the desired replica count of the workload in each resource pool according to the resource target threshold configured for the workload and the monitoring data;
determining whether the workload replica count of each resource pool needs dynamic scaling according to the configured upper and lower limits of the workload replica count of each resource pool and the desired replica count corresponding to each resource pool;
and when it is determined that the workload replica count of a resource pool needs dynamic scaling, modifying the corresponding configuration of the regionSpec field under the BatchWorkload object.
Optionally, the determining whether the workload replica count of each resource pool needs dynamic scaling according to the upper and lower limits of the workload replica count of each resource pool and the desired replica count corresponding to each resource pool includes:
if the desired replica count of a resource pool's workload is smaller than or equal to the minimum replica count of that workload, determining that the replica count of the workload does not need to be scaled;
if the desired replica count is larger than the minimum replica count and smaller than the current replica count, determining to scale in the workload replicas of the resource pool;
if the desired replica count is larger than the current replica count, comparing the desired replica count with the maximum replica count of the resource pool's workload;
if the desired replica count does not exceed the maximum replica count, updating the current replica count of the resource pool's workload to the desired replica count;
and if the desired replica count exceeds the maximum replica count, adjusting the current replica count of the resource pool's workload to the maximum replica count, expanding replicas equal to the difference between the desired replica count and the maximum replica count into other resource pools with spare resources, and labeling the replicas expanded into the other resource pools with the label of this resource pool.
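The decision rules above can be condensed into a small function. This is a minimal sketch under one stated assumption: a pool whose desired count is at or below the minimum simply keeps its current replica count, since the text only says no scaling is needed in that case.

```python
def scale_decision(desired, current, min_replicas, max_replicas):
    """Decide the new replica count for one resource pool and how many
    replicas must spill over to other pools, per the rules above:
      - desired <= min:              no scaling
      - min < desired < current:     scale in to desired
      - current < desired <= max:    scale out to desired
      - desired > max:               cap at max, overflow goes to other pools
    Returns (new_replicas, overflow)."""
    if desired <= min_replicas:
        return current, 0                        # no scaling needed
    if desired < current:
        return desired, 0                        # scale in
    if desired > current:
        if desired <= max_replicas:
            return desired, 0                    # scale out within this pool
        return max_replicas, desired - max_replicas  # overflow to other pools
    return current, 0                            # desired == current: no change
```

The overflow value is the count of replicas that would be created in other pools with spare resources, carrying this pool's label.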
According to a second aspect of the present disclosure, there is provided a processing apparatus for a multi-resource pool application, comprising:
the monitoring unit is used for monitoring an API Server of a Kubernetes cluster and acquiring a resource object event of the BatchWorkload resource type in the Kubernetes cluster, wherein the BatchWorkload resource type is a CRD custom resource type through which the declaration of multiple workloads across resource pools is completed in one CRD object, the workLoads field being general information abstracted from workloads of the Deployment or StatefulSet type in the Kubernetes cluster, and the regionSpec field being used for declaring the topology distribution of the workloads in different resource pools;
and the operation unit is used for, after it is monitored that a BatchWorkload instance has been created in the Kubernetes cluster, parsing the monitored general workload information of the BatchWorkload object and the topology distribution of the resource pools, generating the workload of each resource pool, and completing the application operation of all workloads of the resource pools.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of the preceding first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of the first aspect described above.
According to the processing method for multi-resource pool applications provided by the embodiments of the disclosure, an API Server of a Kubernetes cluster is monitored, and a resource object event of the BatchWorkload resource type in the cluster is acquired. The BatchWorkload resource type is a CRD custom resource type through which the declaration of multiple workloads across resource pools is completed in one CRD object comprising a workLoads field and a regionSpec field, where the workLoads field is general information abstracted from workloads of the Deployment or StatefulSet type in the Kubernetes cluster, and the regionSpec field is used for declaring the topology distribution of the workloads in different resource pools. After it is monitored that a BatchWorkload instance has been created in the cluster, the monitored general workload information and resource pool topology distribution of the BatchWorkload are parsed, the workload of each resource pool is generated, and the application operation of all workloads of the resource pools is completed. Compared with the prior art, the embodiments of the disclosure make a higher-level abstraction of the existing workloads in the Kubernetes cluster, construct a custom CRD resource structure, and provide a method for declaring multiple cross-region workloads with one CRD resource, realizing unified orchestration of a group of workloads spanning multiple resource pools. The constructed CRD configuration information template consists of two parts, workLoads and regionSpec, which respectively describe the general information of the cluster's existing workloads and declare the distribution topology of a group of workloads in different resource pools.
By operating a single CRD resource in the cluster, the general workload information of the CRD object and the resource pool topology are automatically parsed, the workload of each resource pool is generated, and the batch application operation of all workloads of the resource pools is completed, realizing one-click deployment and operation and maintenance of cross-region applications, improving working efficiency, and reducing cost.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic diagram of an overall architecture of a system for processing a multi-resource pool application according to an embodiment of the present disclosure;
fig. 2 is a schematic process flow diagram of a multi-resource pool application provided in an embodiment of the present disclosure;
fig. 3 is a schematic diagram of an automatic capacity expansion and contraction flow of a cross-resource pool in a multi-resource pool application process according to an embodiment of the present disclosure;
FIG. 4 is a block diagram of a processing device for a multi-resource pool application provided in an embodiment of the present disclosure;
FIG. 5 is a block diagram of another processing device for a multi-resource pool application provided by an embodiment of the present disclosure;
FIG. 6 is a block diagram of a processing device for another multi-resource pool application provided by an embodiment of the present disclosure;
FIG. 7 is a block diagram of another processing device for a multi-resource pool application provided by an embodiment of the present disclosure;
fig. 8 is a schematic block diagram of an example electronic device provided by an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The following describes the processing method and device, electronic device, and storage medium of the multi-resource pool application of the embodiments of the present disclosure with reference to the accompanying drawings.
To solve the above-mentioned problems, the present application provides a processing method for multi-resource pool applications. By constructing a custom CRD resource type in the Kubernetes cluster, a higher-level abstraction of multiple workloads across resource pools is achieved: a user can make the orchestration declaration of a group of workloads across resource pools with only one CRD object. A CRD resource management control component is extended in the Kubernetes cluster to monitor the CRD resource objects in the cluster; it automatically parses the monitored CRD objects and uniformly maintains the lifecycle management and delivery of the group of cross-resource-pool workloads. The architecture of the processing of such a multi-resource pool application is shown in fig. 1:
A custom CRD resource type BatchWorkload is constructed in the Kubernetes cluster, abstracting the existing workloads so that a group of cross-region, multi-replica workloads can be described in one BatchWorkload object. The constructed BatchWorkload object structure mainly contains two parts, workLoads and regionSpec. The workLoads field is general information abstracted from workloads of the existing Deployment or StatefulSet type of the Kubernetes cluster, including the container images, volumes, restart policies, and so on of the group of services; regionSpec is used to declare the distribution topology of the group of workloads across the different resource pools and supports differentiated configuration of the workload replicas of the different resource pools.
In the embodiment of the disclosure, a CRD custom resource type is constructed in the Kubernetes cluster, the general information of a group of workloads is declared in the workLoads field, and the topology distribution and replica counts of the different resource pools are declared in the regionSpec field.
When a user needs to deploy a set of service systems in batches across resource pools, the general information of the group of workloads can be declared in the workLoads field and the topology distribution and workload replica counts of the different resource pools in the regionSpec field; the user can then manage the group of workloads by maintaining only this one BatchWorkload resource object, without maintaining the workload of each resource pool separately.
After the CRD resource structure of the BatchWorkload type constructed above is declared in the Kubernetes cluster, the user can write a specific BatchWorkload object based on it, describing the container specification, resource pool distribution, and corresponding replica counts of a group of cross-resource-pool services, and submit the BatchWorkload object to the cluster's API Server through the kubectl tool or by calling the k8s API. The extended CRD control management module monitors the BatchWorkload resource and realizes unified management of the lifecycle of the workloads in the BatchWorkload object.
Specific unified management may be implemented with reference to the method shown in fig. 2; fig. 2 is a schematic flow chart of the processing of a multi-resource pool application provided in an embodiment of the disclosure. As shown in fig. 2, the method includes the following steps:
step 101, monitoring an API Server of a Kubernetes cluster, and obtaining a resource object event of the BatchWorkload resource type in the Kubernetes cluster, where the BatchWorkload resource type is a CRD custom resource type through which the declaration of multiple workloads across resource pools is completed in one CRD object, the workLoads field being general information abstracted from workloads of the Deployment or StatefulSet type in the Kubernetes cluster, and the regionSpec field being used for declaring the topology distribution of the workloads in different resource pools.
Step 102, after it is monitored that a BatchWorkload instance has been created in the Kubernetes cluster, parsing the monitored general workload information and resource pool topology distribution of the BatchWorkload, generating the workload of each resource pool, and completing the application operation of all workloads of the resource pools.
Further, after it is monitored that a BatchWorkload instance has been created in the Kubernetes cluster, the parsing of the monitored general workload information and resource pool topology distribution of the BatchWorkload, the generation of the workload of each resource pool, and the completion of the application operation of all workloads of the resource pools may be implemented by, but is not limited to, the following method:
1. generating a general workload template object of the resource pools according to the monitored configuration in workLoads.deploymentSpec or workLoads.statefulSetSpec of the BatchWorkload;
2. automatically generating a corresponding workload object for each resource pool according to the general workload template object and the topology distribution of the workloads in different resource pools configured by the regionSpec field;
3. calling the Kubernetes API according to the workload objects, and submitting the corresponding number of Deployment-type workload objects to the Kubernetes cluster;
4. distributing the application to the working nodes of the different resource pools according to the nodeSelector labels specified when the workloads were created, so that when the kubelet component on a corresponding working node monitors that the application has been scheduled, the kubelet component pulls up the containers to perform the corresponding operations.
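The first two steps above — expanding the generic template into one per-pool workload object — can be sketched as follows. The field names, the `pool` label key, and the name-suffix convention are assumptions based on the description:

```python
import copy

def generate_workloads(batch_name, template_spec, region_spec):
    """Expand a generic workload template into one Deployment-like object
    per resource pool: the name gets the pool suffix, and the replicas and
    nodeSelector come from that pool's regionSpec entry. Returned dict is
    keyed by the unique workload name."""
    workloads = {}
    for region in region_spec:
        pool = region["region"]
        spec = copy.deepcopy(template_spec)              # pool-specific copy
        spec["replicas"] = region["replicas"]
        spec.setdefault("template", {}).setdefault("spec", {})[
            "nodeSelector"] = {"pool": pool}             # assumed label key
        workloads[f"{batch_name}-{pool}"] = {
            "kind": "Deployment",
            "metadata": {"name": f"{batch_name}-{pool}"},
            "spec": spec,
        }
    return workloads

objs = generate_workloads(
    "app",
    {"template": {"spec": {"containers": [{"name": "app", "image": "nginx:1.25"}]}}},
    [{"region": "suzhou", "replicas": 2}, {"region": "guangzhou", "replicas": 3}],
)
```

In a real controller, each generated object would then be submitted to the cluster through the Kubernetes API (step 3).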
In the method described above, under the architecture of fig. 1, the CRD resource management control module mainly includes a CRD resource monitoring module, a CRD resource parsing module, a workload management module, and a storage module. The specific implementation of each module is as follows:
1) After the CRD control management module starts, the resource monitoring module begins to List-Watch the API Server of the cluster to acquire resource object events of the BatchWorkload type in the cluster, including creation, deletion, and modification events;
2) when it is monitored that a BatchWorkload instance has been created in the cluster, the CRD resource parsing module first generates a general workload template object according to the configuration in workLoads.deploymentSpec or workLoads.statefulSetSpec of the monitored object;
3) based on the template object generated in the previous step, the resource parsing module further automatically generates a complete workload object for each resource pool according to the topology distribution of the workloads in different resource pools configured by the regionSpec field; each generated workload appends the name of its resource pool as a suffix to the name field, which uniquely identifies it in the cluster. For the BatchWorkload sample in step 1, the CRD control management module generates two Deployment resource objects, app-suzhou and app-guangzhou, in the cluster, where the replica count of app-suzhou is 2, the replica count of app-guangzhou is 3, and the nodeSelector of each is the name of the corresponding resource pool;
4) the resource parsing module notifies the workload management module to create the real workloads, and the workload management module calls the Kubernetes API to submit and create the two Deployment workload objects app-suzhou and app-guangzhou in the cluster;
5) all working nodes in the cluster are marked with the label of their resource pool when they join, and the cluster scheduling component distributes the application to the working nodes of the different resource pools according to the nodeSelector labels specified when the CRD control management module created the workloads;
6) once the kubelet component on a corresponding working node detects that the application has been scheduled, it immediately pulls up the containers.
In some embodiments of the disclosure, after automatically generating a corresponding workload object for each resource pool according to the general workload template object and the topology distribution of the workloads in different resource pools configured by the regionSpec field, the method further comprises:
storing the generated workload objects as key-value pairs, wherein the key of each pair is the name of the workload and the value is the specific information of the workload.
Based on the above description, the resource parsing module stores the generated workload objects in an internal storage module, which stores them as key-value pairs whose key is the name of each workload and whose value is the specific information of the workload. Since the name must be unique when the BatchWorkload object is created, and the name of each workload generated from it appends the resource pool name as a suffix to the BatchWorkload name, the names of all generated workloads are also unique.
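A minimal sketch of such a key-value storage module, assuming a plain in-memory dict keyed by workload name and the hyphenated name-suffix convention described above:

```python
class WorkloadStore:
    """Key-value storage for generated workload objects: the key is the
    workload's unique name, the value is the full workload object."""
    def __init__(self):
        self._items = {}

    def put(self, workload):
        self._items[workload["metadata"]["name"]] = workload

    def get(self, name):
        return self._items.get(name)

    def by_batch(self, batch_name):
        # All workloads generated from one BatchWorkload share its name as
        # prefix, so the controller can pull them back by BatchWorkload name.
        return [w for k, w in self._items.items()
                if k.startswith(batch_name + "-")]

store = WorkloadStore()
store.put({"metadata": {"name": "app-suzhou"}, "spec": {"replicas": 2}})
store.put({"metadata": {"name": "app-guangzhou"}, "spec": {"replicas": 3}})
```

The `by_batch` lookup is what lets a single BatchWorkload modification event fan out to every stored per-pool workload.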
Further, in the embodiment of the disclosure, when the configuration of the workloads needs to be maintained, the corresponding configuration of the workLoads field and/or the regionSpec field under the BatchWorkload object is modified according to the maintenance requirement. Specifically, when the configuration of the workloads needs maintenance, such as upgrading an image or scaling capacity, the user only needs to modify the workLoads and/or regionSpec configuration under the corresponding BatchWorkload object, which triggers a BatchWorkload modification event.
The CRD control management module pulls all workloads of the BatchWorkload stored in the internal storage according to the name of the BatchWorkload, and updates the new configuration to the workloads under each node pool through the API Server. A user does not need to maintain a large number of Deployment or StatefulSet workloads; by maintaining only one CRD object, the CRD control management module automatically completes one-click configuration upgrade and capacity expansion of the workloads across the multiple resource pools.
Furthermore, this embodiment supports automatic dynamic scaling of service copies among the resource pools according to the actual service traffic of each resource pool. The CRD declaration template supports policy configuration of dynamic scaling, and can specify the monitored resource type resourceName, the maximum copy number maxReplicas, the minimum copy number minReplicas and the target threshold target for triggering automatic scaling. The user-defined CRD resource type BatchWorkload supports configuration of the regionSpec.autoScale field to declare that a group of workloads across resource pools needs to dynamically scale the copies of the service among the resource pools automatically according to the monitored service traffic, CPU, memory and other resource usage conditions specified by autoScale.resourceName, as shown in FIG. 3.
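An illustrative BatchWorkload object carrying such an autoScale policy might look like the following. The field names workLoads, regionSpec, autoScale, resourceName, minReplicas, maxReplicas and target come from the disclosure; the API group, version and exact nesting are assumptions for illustration only.

```python
# Hypothetical BatchWorkload manifest, represented as a Python dict.
batch_workload = {
    "apiVersion": "example.io/v1",  # assumed group/version
    "kind": "BatchWorkload",
    "metadata": {"name": "web"},
    "spec": {
        # general information shared by every generated workload
        "workLoads": {
            "deploymentSpec": {
                "template": {"spec": {"containers": [
                    {"name": "web", "image": "nginx:1.25"}]}},
            },
        },
        # topology distribution: one entry per resource pool, each with a
        # dynamic scaling policy
        "regionSpec": [
            {"region": "pool-a", "replicas": 3,
             "autoScale": {"resourceName": "cpu", "minReplicas": 2,
                           "maxReplicas": 10, "target": 500}},
            {"region": "pool-b", "replicas": 2,
             "autoScale": {"resourceName": "cpu", "minReplicas": 1,
                           "maxReplicas": 8, "target": 500}},
        ],
    },
}
```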
The maintenance includes dynamic scaling of the workload copies. When the configuration of the workload needs to be maintained and the corresponding configuration of the workLoads field and/or the regionSpec field under the BatchWorkload object is modified according to the maintenance requirement, this may be implemented by, but is not limited to, the following method:
1. periodically collecting monitoring data corresponding to the workload on each node of the resource pool;
2. according to the resource target threshold value configured by the workload and the monitoring data, calculating the expected copy number of the workload corresponding to the workload in each resource pool;
3. determining whether the workload copy number of each resource pool needs to be dynamically expanded and contracted or not through the configured upper limit and lower limit of the workload copy number of each resource pool and the workload expected copy number corresponding to each resource pool;
4. when it is determined that the number of workload copies of a resource pool needs dynamic scaling, modifying the corresponding configuration of the regionSpec field under the BatchWorkload object.
Further, the determining whether the workload copy number of each resource pool needs to be dynamically scaled according to the workload copy upper limit and the workload copy lower limit of each resource pool and the workload expected copy number corresponding to each resource pool may be implemented by, but not limited to, the following methods:
1) If the number of the expected copies of the workload of the resource pool is smaller than or equal to the minimum number of copies of the workload of the resource pool, determining that the number of the copies of the workload of the resource pool does not need to be expanded and contracted;
2) If the number of the workload expected copies of the resource pool is larger than the minimum number of the workload copies and smaller than the current number of the workload copies, determining to shrink the number of the workload copies of the resource pool;
3) If the expected copy number of the workload of the resource pool is larger than the current copy number of the workload, comparing the relation between the expected copy number of the workload and the maximum copy number of the workload of the resource pool;
4) If the number of the expected copies of the resource pool workload does not exceed the maximum number of copies of the resource pool workload, updating the current number of copies of the resource pool workload to the number of the expected copies of the workload;
5) And if the expected copy number of the workload of the resource pool exceeds the maximum copy number of the workload of the resource pool, adjusting the current copy number of the workload of the resource pool to the maximum copy number of the workload, expanding the copy of the difference value between the maximum copy number of the workload and the expected copy number of the workload to other resource pools with redundant resources, and labeling the copy expanded to other resource pools with the labels of the resource pool.
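The per-pool decision rules 1)–5) above can be sketched as a single function. Inputs and the function name are illustrative; the disclosure states the rules only in prose.

```python
# Minimal sketch of the scaling decision for one resource pool.
def scaling_decision(expect: int, current: int,
                     min_replicas: int, max_replicas: int) -> tuple[int, int]:
    """Return (new replica count for this pool, overflow copies for other pools)."""
    if expect <= min_replicas:
        return current, 0                 # rule 1): no scaling needed
    if min_replicas < expect < current:
        return expect, 0                  # rule 2): scale down
    if expect > current:
        if expect <= max_replicas:
            return expect, 0              # rule 4): scale up within the pool
        # rule 5): cap at maxReplicas; the difference overflows to other pools
        return max_replicas, expect - max_replicas
    return current, 0                     # expect == current: nothing to do
```

Note that rule 1) keeps the current copy number unchanged when the expected count is at or below the minimum, rather than shrinking to the minimum, matching the disclosure's "does not need to be scaled".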
According to the method, the metrics server component of the kubernetes cluster periodically collects the monitoring data corresponding to the workload on each node of the resource pool, and the CRD resource monitoring module obtains the monitoring data through the API Server.
The CRD resource monitoring module calculates the expected copy number of the workload corresponding to each resource pool according to the resource target threshold target configured by the workload:
replicas_expect = Σ resource_monitor / target
wherein replicas_expect is the expected copy number of the current resource pool, Σ resource_monitor is the sum of the corresponding resource monitoring indexes over all copies of the workload in the current resource pool, and target is the monitoring resource target threshold configured in the CRD.
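The formula above, computed per resource pool, can be sketched as follows. Rounding the quotient up is an assumption (a fractional copy cannot run); the disclosure gives only the quotient itself.

```python
import math

def expected_replicas(copy_metrics: list[float], target: float) -> int:
    """replicas_expect = (sum of the monitored metric over all copies) / target."""
    return math.ceil(sum(copy_metrics) / target)
```

For example, three copies each reporting 400m CPU against a target of 500m yield an expected count of 3 (ceil of 2.4).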
The CRD ensures that the number of copies of the workload of each resource pool meets the basic working requirements by supporting configuration of upper and lower copy limits of each resource pool. Classifying whether the current workload needs dynamic expansion and contraction in each resource pool according to configuration:
1) No scaling needed:
the expected copy number is not greater than the minimum copy number, replicas_expect ≤ minReplicas; or the expected copy number equals the current copy number, replicas_expect = replicas_current;
2) Scale down:
the expected copy number lies between the minimum and the current copy number, minReplicas < replicas_expect < replicas_current;
3) Scale up:
the expected copy number is greater than the current copy number, replicas_expect > replicas_current.
For the scale-down operation, the CRD resource management control module dynamically updates the copy number of the workload of the corresponding resource pool, modifying it to the calculated expected copy number replicas_expect.
And the capacity expansion operation needs to be further classified:
When replicas_current < replicas_expect ≤ maxReplicas: as in the scale-down operation, since the expected copy number does not exceed the configured maximum copy number, the CRD resource management control module directly updates the copy number of the corresponding resource pool workload to the expected copy number replicas_expect.
When replicas_expect > maxReplicas: the workloads of different resource pools may face random traffic peaks, so that the expected copy number of the current resource pool exceeds the configured maximum, while other resource pools may still have idle resources. In this scenario, the excess expected copies are dynamically expanded to other resource pools, which improves resource utilization and at the same time solves the problem that the current resource pool cannot host more copies during the traffic peak.
Further, dynamically expanding the excess desired copy to other resource pools may be accomplished by, but is not limited to, a method comprising:
First step: calculate the difference between the expected copy number and the maximum copy number: replicas_delta = replicas_expect − maxReplicas;
Second step: adjust the copy number of the current resource pool workload to the maximum value maxReplicas;
Third step: traverse the other resource pools and calculate the number of redundant copies each of them can still host: replicas_remain = maxReplicas − replicas_current, and sort the pools in descending order of replicas_remain. The replicas_delta excess copies calculated in the first step are then expanded, in order, to the resource pools in this descending queue, so that the copies the current resource pool cannot host are expanded to other resource pools with redundant resources.
Fourth step: labeling labels of the resource pool for copies extended to other resource pools. Services such as service and the like corresponding to the workload of the subsequent current resource pool can be accessed by the label to extend to the working copies of other resource pools.
It should be noted that the embodiments of the present disclosure may include a plurality of steps, and these steps are numbered for convenience of description; the numbers are not limitations on the execution timing or execution order of the steps, and the steps may be performed in any order. Embodiments of the present disclosure are not limited in this regard.
According to the embodiment of the present disclosure, the API Server of a kubernetes cluster is monitored to obtain resource object events of the BatchWorkload resource type in the kubernetes cluster, where the BatchWorkload resource type is a CRD custom resource type that declares a plurality of workloads across resource pools through one CRD object and comprises a workLoads field and a regionSpec field: the workLoads field is general information abstracting workloads of the Deployment or StatefulSet type in the kubernetes cluster, and the regionSpec field declares the topological distribution of the workloads across different resource pools. After it is monitored that a BatchWorkload instance is created in the kubernetes cluster, the general workload information of the BatchWorkload object and the topology distribution of the resource pools are parsed, the workload of each resource pool is generated, and the application operation of all the workloads of the resource pools is completed. Compared with the prior art, the embodiment of the present disclosure performs a higher-level abstraction on the existing workloads in the kubernetes cluster, constructs a custom CRD resource structure, provides a method for declaring a plurality of cross-region workloads with one CRD resource, realizes unified orchestration of a group of workloads spanning multiple resource pools, and constructs a CRD configuration information template consisting of two parts, workLoads and regionSpec, which respectively describe the general information of the existing cluster workloads and declare the distribution topology of a group of workloads across different resource pools.
By operating a certain CRD resource in the cluster, the general information of the workload of the CRD object and the topology of the resource pool are automatically analyzed, the workload of each resource pool is generated, batch application operation of all the workload of the resource pool is completed, one-key deployment and operation and maintenance of cross-region application are realized, the working efficiency is improved, and the cost is reduced.
The invention also provides a processing device for the multi-resource pool application. Since the device embodiment of the present invention corresponds to the above-mentioned method embodiment, details not disclosed in the device embodiment may refer to the above-mentioned method embodiment and are not described again herein.
Fig. 4 is a processing apparatus for multi-resource pool application provided in an embodiment of the present disclosure, where the apparatus includes:
a monitoring unit 201, configured to monitor an API Server of a kubernetes cluster and obtain resource object events of the BatchWorkload resource type in the kubernetes cluster, where the BatchWorkload resource type is a CRD custom resource type that completes the declaration of multiple workloads across resource pools through one CRD object, the workLoads field is general information abstracting workloads of the Deployment or StatefulSet type in the kubernetes cluster, and the regionSpec field is used to declare the topology distribution of the workloads in different resource pools;
and an operation unit 202, configured to, after monitoring that a BatchWorkload instance is created in the kubernetes cluster, parse the monitored general workload information of the BatchWorkload object and the topology distribution of the resource pools, generate the workload of each resource pool, and complete the application operation of all the workloads of the resource pools.
Further, in a possible implementation manner of this embodiment, the operation unit 202 includes:
the analyzing module is configured to analyze the monitored workload general information of the BatchWorkload object and the topology distribution of the resource pools, generate a workload of each resource pool, and complete application operations of all the workloads of the resource pools, and includes:
the first generation module is used for generating a general workload template object of the resource pool according to the monitored configuration in workLoads.deploymentSpec or workLoads.statefulsetSpec in the BatchWorkload;
the second generating module is used for automatically generating a corresponding workload object for each resource pool according to the topology distribution of the workload configured by the general workload template object and the regionSpec field in different resource pools;
the first sending module is used for calling the kubernetes API according to the workload objects, and respectively submitting the corresponding number of workload objects of the Deployment or StatefulSet type to the kubernetes cluster;
and the second sending module is used for distributing the application to the working nodes of different resource pools according to the nodeSelector label specified when the workload is created, so that the kubelet component on the corresponding working node detects that the application has been scheduled and pulls up the container to perform the corresponding operation.
Further, as shown in fig. 5, the processing device of the multi-resource pool application includes a storage unit 203.
The storage unit 203 is configured to, after a corresponding workload object is automatically generated for each resource pool according to the topology distribution of the workload configured by the generic workload template object and the regionSpec field in different resource pools, store the generated workload objects as key-value pairs, where the key of each key-value pair is the name of each workload and the value is the specific information of each workload.
Further, as shown in fig. 6, the processing apparatus of the multi-resource pool application further includes a construction unit 204:
the construction unit 204 is configured to construct a CRD custom resource type, a batch workload resource type, declare general information of a group of workLoads in the workload fields, and declare topology distribution and copy number of different resource pools in the regionSpec fields in the kubernetes cluster.
Further, as shown in fig. 7, the processing apparatus of the multi-resource pool application further includes an updating unit 205:
the updating unit 205 is configured to modify, when the configuration of the workload needs to be maintained, a corresponding configuration of a workLoads field and/or a regionSpec field under the BatchWorkload object according to a maintenance requirement.
The maintaining includes dynamic expansion and contraction of the workload copy, and when the configuration of the workload needs to be maintained, modifying the corresponding configuration of the workLoads field and/or the regionSpec field under the BatchWorkload object according to the maintenance requirement includes:
periodically collecting monitoring data corresponding to the workload on each node of the resource pool;
according to the resource target threshold value configured by the workload and the monitoring data, calculating the expected copy number of the workload corresponding to the workload in each resource pool;
determining whether the workload copy number of each resource pool needs to be dynamically expanded and contracted or not through the configured upper limit and lower limit of the workload copy number of each resource pool and the workload expected copy number corresponding to each resource pool;
when it is determined that the number of workload copies of a resource pool needs dynamic scaling, modifying the corresponding configuration of the regionSpec field under the BatchWorkload object.
Determining whether the number of the workload copies of each resource pool needs to be dynamically scaled according to the upper and lower limits of the workload copies of each resource pool and the number of the workload expected copies corresponding to each resource pool comprises:
If the number of the expected copies of the workload of the resource pool is smaller than or equal to the minimum number of copies of the workload of the resource pool, determining that the number of the copies of the workload of the resource pool does not need to be expanded and contracted;
if the number of the workload expected copies of the resource pool is larger than the minimum number of the workload copies and smaller than the current number of the workload copies, determining to shrink the number of the workload copies of the resource pool;
if the expected copy number of the workload of the resource pool is larger than the current copy number of the workload, comparing the relation between the expected copy number of the workload and the maximum copy number of the workload of the resource pool;
if the number of the expected copies of the resource pool workload does not exceed the maximum number of copies of the resource pool workload, updating the current number of copies of the resource pool workload to the number of the expected copies of the workload;
and if the expected copy number of the workload of the resource pool exceeds the maximum copy number of the workload of the resource pool, adjusting the current copy number of the workload of the resource pool to the maximum copy number of the workload, expanding the copy of the difference value between the maximum copy number of the workload and the expected copy number of the workload to other resource pools with redundant resources, and labeling the copy expanded to other resource pools with the labels of the resource pool.
The foregoing explanation of the method embodiment is also applicable to the apparatus of this embodiment, and the principle is the same, and this embodiment is not limited thereto.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 8 illustrates a schematic block diagram of an example electronic device 300 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 300 includes a computing unit 301 that can perform various appropriate actions and processes according to a computer program stored in a ROM (Read-Only Memory) 302 or a computer program loaded from a storage unit 308 into a RAM (Random Access Memory ) 303. In the RAM 303, various programs and data required for the operation of the device 300 may also be stored. The computing unit 301, the ROM 302, and the RAM 303 are connected to each other by a bus 304. An I/O (Input/Output) interface 305 is also connected to bus 304.
Various components in device 300 are connected to I/O interface 305, including: an input unit 306 such as a keyboard, a mouse, etc.; an output unit 307 such as various types of displays, speakers, and the like; a storage unit 308 such as a magnetic disk, an optical disk, or the like; and a communication unit 309 such as a network card, modem, wireless communication transceiver, etc. The communication unit 309 allows the device 300 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 301 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 301 include, but are not limited to, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), various dedicated AI (Artificial Intelligence) computing chips, various computing units running machine learning model algorithms, a DSP (Digital Signal Processor), and any suitable processor, controller, microcontroller, etc. The computing unit 301 performs the various methods and processes described above, such as the processing of a multi-resource pool application. For example, in some embodiments, the processing of the multi-resource pool application may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 308. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 300 via the ROM 302 and/or the communication unit 309. When the computer program is loaded into RAM 303 and executed by computing unit 301, one or more steps of the method described above may be performed. Alternatively, in other embodiments, the computing unit 301 may be configured to perform the processing of the aforementioned multi-resource pool application in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, FPGAs (Field Programmable Gate Arrays), ASICs (Application-Specific Integrated Circuits), ASSPs (Application Specific Standard Products), SOCs (Systems On Chip), CPLDs (Complex Programmable Logic Devices), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a special purpose or general-purpose programmable processor, and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, RAM, ROM, EPROM (Erasable Programmable Read-Only Memory) or flash memory, an optical fiber, a CD-ROM (Compact Disc Read-Only Memory), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., CRT (Cathode-Ray Tube) or LCD (Liquid Crystal Display ) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: LAN (Local Area Network ), WAN (Wide Area Network, wide area network), internet and blockchain networks.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so that the defects of high management difficulty and weak service expansibility in the traditional physical hosts and VPS service ("Virtual Private Server" or simply "VPS") are overcome. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be noted that artificial intelligence is the discipline of studying how to make computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking and planning), and it involves technologies at both the hardware and software levels. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, knowledge graph technologies, and the like.
The various numbers of first, second, etc. referred to in this disclosure are merely for ease of description and are not intended to limit the scope of embodiments of this disclosure, nor to indicate sequencing.
"At least one" in the present disclosure may also be described as "one or more", and "a plurality" may be two, three, four or more, which is not limited by the present disclosure. In the embodiments of the present disclosure, for a technical feature, the technical features are distinguished by "first", "second", "third", "A", "B", "C" and "D", and the technical features so described are not in any sequence or order of magnitude.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (10)
1. A method for processing a multi-resource pool application, comprising:
monitoring an API Server of a kubernetes cluster, and acquiring a resource object event of a BatchWorkload resource type in the kubernetes cluster, wherein the BatchWorkload resource type is a CRD custom resource type, and the declaration of a plurality of workloads across resource pools is completed through one CRD object, wherein the workLoads field is general information abstracting workloads of the Deployment or StatefulSet type in the kubernetes cluster, and the regionSpec field is used for declaring the topological distribution of the workloads in different resource pools;
after monitoring that a BatchWorkload instance is created in the kubernetes cluster, parsing the monitored general workload information of the BatchWorkload object and the topology distribution of the resource pools, generating the workload of each resource pool, and completing the application operation of all the workloads of the resource pools.
2. The method of claim 1, wherein the parsing the monitored workload general information of the batch workload object and the topology distribution of the resource pools, generating a workload for each resource pool, and completing an application operation of all the workloads of the resource pool, comprises:
generating a general workload template object of the resource pool according to the monitored configuration in workLoads.deploymentSpec or workLoads.statefulsetSpec in the BatchWorkload;
according to the topology distribution of the workload configured by the universal workload template object and the regionSpec field in different resource pools, automatically generating a corresponding workload object for each resource pool;
calling a kubernetes API according to the workload objects, and respectively submitting the corresponding number of workload objects of the Deployment or StatefulSet type to the kubernetes cluster;
and distributing the application to the working nodes of different resource pools according to the nodeSelector label specified when the workload is created, so that the kubelet component on the corresponding working node detects that the application has been scheduled and pulls up the container to perform the corresponding operation.
3. The method of claim 2, further comprising, after automatically generating a corresponding workload object for each resource pool according to the generic workload template object and the topology distribution of the workload in different resource pools configured by the regionSpec field:
and storing the generated workload objects as key-value pairs, wherein the key of each key-value pair is the name of the workload, and the value is the specific information of the workload.
4. A method according to any one of claims 1-3, characterized in that the method further comprises:
constructing the CRD custom resource type BatchWorkload in the kubernetes cluster, declaring the general information of a group of workloads in the workLoads field, and declaring the topology distribution and the number of copies in different resource pools in the regionSpec field.
5. The method according to claim 4, further comprising:
when the configuration of the workload needs to be maintained, modifying the corresponding configuration of the workLoads field and/or the regionSpec field under the BatchWorkload object according to the maintenance requirement.
6. The method of claim 5, wherein the maintenance comprises dynamic scaling of workload copies, and the modifying, when the configuration of the workload needs to be maintained, the corresponding configuration of the workLoads field and/or the regionSpec field under the BatchWorkload object according to the maintenance requirement comprises:
periodically collecting monitoring data corresponding to the workload on each node of the resource pool;
calculating, according to the resource target threshold configured for the workload and the monitoring data, the desired number of workload copies corresponding to the workload in each resource pool;
determining whether the number of workload copies of each resource pool needs dynamic scaling according to the configured upper and lower limits of the number of workload copies of each resource pool and the desired number of workload copies corresponding to each resource pool;
and when it is determined that the number of workload copies of a resource pool needs dynamic scaling, modifying the corresponding configuration of the regionSpec field under the BatchWorkload object.
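The claims do not give a formula for the desired-copy computation in claim 6; a common choice, and the one Kubernetes' HorizontalPodAutoscaler uses, is the proportional rule below. Treating it as the intended computation is an assumption:

```python
import math

# Assumed desired-replica rule (standard HPA proportionality), not
# stated in the claims:
#   desired = ceil(current_replicas * current_metric / target_metric)
def desired_replicas(current_replicas, current_metric, target_metric):
    if target_metric <= 0:
        raise ValueError("target metric must be positive")
    return math.ceil(current_replicas * current_metric / target_metric)

# 3 replicas at 90% utilization against a 60% target -> 4.5, rounded up
print(desired_replicas(3, 90, 60))  # 5
```

The monitoring data collected per node would supply current_metric, and the workload's configured resource target threshold would supply target_metric.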
7. The method of claim 6, wherein the determining whether the number of workload copies of each resource pool needs dynamic scaling according to the upper and lower limits of the number of workload copies of each resource pool and the desired number of workload copies corresponding to each resource pool comprises:
if the desired number of workload copies of a resource pool is smaller than or equal to the minimum number of workload copies of the resource pool, determining that the number of workload copies of the resource pool does not need scaling;
if the desired number of workload copies of the resource pool is larger than the minimum number of workload copies and smaller than the current number of workload copies, determining to shrink the number of workload copies of the resource pool;
if the desired number of workload copies of the resource pool is larger than the current number of workload copies, comparing the desired number of workload copies with the maximum number of workload copies of the resource pool;
if the desired number of workload copies of the resource pool does not exceed the maximum number of workload copies of the resource pool, updating the current number of workload copies of the resource pool to the desired number of workload copies;
and if the desired number of workload copies of the resource pool exceeds the maximum number of workload copies of the resource pool, adjusting the current number of workload copies of the resource pool to the maximum number of workload copies, expanding copies amounting to the difference between the desired number of workload copies and the maximum number of workload copies to other resource pools with redundant resources, and labeling the copies expanded to the other resource pools with the label of this resource pool.
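The per-pool decision rule of claim 7 can be sketched as a small function returning the pool's new replica count together with any overflow that should be expanded to other pools with spare resources (those overflow copies keep this pool's label). The function signature and return shape are assumptions:

```python
# Sketch of the per-pool scaling decision in claim 7.
# Returns (new_replica_count, overflow_to_other_pools).
def scale_decision(current, desired, min_replicas, max_replicas):
    if desired <= min_replicas:
        return current, 0            # at or below the floor: no scaling needed
    if desired < current:
        return desired, 0            # shrink toward the desired count
    if desired <= max_replicas:
        return desired, 0            # grow within this pool's upper limit
    # cap at the pool maximum; the remainder overflows to other pools
    return max_replicas, desired - max_replicas

# desired 7 exceeds the pool maximum of 5: cap at 5, overflow 2 copies
print(scale_decision(current=4, desired=7, min_replicas=1, max_replicas=5))  # (5, 2)
```

A controller would apply the result by rewriting the regionSpec field of the BatchWorkload object, letting the expansion logic of claim 2 reconcile each pool's workload.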
8. A processing apparatus for a multi-resource pool application, comprising:
the monitoring unit is used for monitoring an API Server of a kubernetes cluster and acquiring resource object events of a BatchWorkload resource type in the kubernetes cluster, wherein the BatchWorkload resource type is a CRD custom resource type, and the declaration of a plurality of workloads across resource pools is completed through one CRD object, wherein the workLoads field is general information abstracting workloads of the Deployment or StatefulSet type in the kubernetes cluster, and the regionSpec field is used for declaring the topology distribution of the workloads in different resource pools;
and the operation unit is used for, after monitoring that a BatchWorkload instance is created in the kubernetes cluster, parsing the monitored general workload information of the BatchWorkload object and the topology distribution of the resource pools, generating a workload for each resource pool, and completing the application operation of all the workloads of the resource pools.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311639282.5A CN117608754A (en) | 2023-12-01 | 2023-12-01 | Processing method and device for multi-resource pool application, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117608754A true CN117608754A (en) | 2024-02-27 |
Family
ID=89953138
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311639282.5A Pending CN117608754A (en) | 2023-12-01 | 2023-12-01 | Processing method and device for multi-resource pool application, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117608754A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118394357A (en) * | 2024-06-28 | 2024-07-26 | 北京火山引擎科技有限公司 | Application deployment method, application access method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113377540B (en) | Cluster resource scheduling method and device, electronic equipment and storage medium | |
Han et al. | Tailored learning-based scheduling for kubernetes-oriented edge-cloud system | |
Li et al. | A scientific workflow management system architecture and its scheduling based on cloud service platform for manufacturing big data analytics | |
CN109753356A (en) | A kind of container resource regulating method, device and computer readable storage medium | |
CN113168569A (en) | Decentralized distributed deep learning | |
JP2014532247A (en) | Discoverable identification and migration of easily cloudable applications | |
US20200034196A1 (en) | Optimizing simultaneous startup or modification of inter-dependent machines with specified priorities | |
CN104298550A (en) | Hadoop-oriented dynamic scheduling method | |
Cheong et al. | SCARL: Attentive reinforcement learning-based scheduling in a multi-resource heterogeneous cluster | |
Kanwal et al. | Multiphase fault tolerance genetic algorithm for vm and task scheduling in datacenter | |
CN117608754A (en) | Processing method and device for multi-resource pool application, electronic equipment and storage medium | |
CN114862656A (en) | Method for acquiring training cost of distributed deep learning model based on multiple GPUs | |
CN114820279B (en) | Distributed deep learning method and device based on multiple GPUs and electronic equipment | |
KR20210156243A (en) | Training methods of deep-running frameworks, devices and storage media | |
CN112527509A (en) | Resource allocation method and device, electronic equipment and storage medium | |
CN118364918B (en) | Reasoning method, device, equipment and storage medium of large language model | |
CN115202847A (en) | Task scheduling method and device | |
CN117435306A (en) | Cluster container expansion and contraction method, device, equipment and storage medium | |
Singh et al. | To offload or not? an analysis of big data offloading strategies from edge to cloud | |
KR20240149371A (en) | Cluster-based training method and apparatus, electronic device and storage medium | |
Nair et al. | Overload prediction and avoidance for maintaining optimal working condition in a fog node | |
CN111124644A (en) | Method, device and system for determining task scheduling resources | |
Yu et al. | Integrating cognition cost with reliability QoS for dynamic workflow scheduling using reinforcement learning | |
EP4235424A1 (en) | Resource control method for function computing, device, and medium | |
CN113722079B (en) | Task scheduling distribution method, device, equipment and medium based on target application |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |