CN117149423A - Scheduling method and device based on distributed cluster

Info

Publication number
CN117149423A
Authority
CN
China
Prior art keywords: resource object model, distributed cluster, relation information, affinity
Prior art date
Legal status: Pending (status listed by Google is an assumption, not a legal conclusion)
Application number
CN202311114893.8A
Other languages
Chinese (zh)
Inventor
顾欣
王鹏培
凌晨
程鹏
Current Assignee
Industrial and Commercial Bank of China Ltd (ICBC)
Original Assignee
Industrial and Commercial Bank of China Ltd (ICBC)
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority application: CN202311114893.8A
Publication: CN117149423A

Classifications

    • G06F9/5077: Logical partitioning of resources; Management or configuration of virtualized resources (under G06F9/50, Allocation of resources, e.g. of the central processing unit [CPU])
    • G06F18/23213: Non-hierarchical clustering techniques using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering (under G06F18/23, Clustering techniques)
    • G06F9/45558: Hypervisor-specific management and integration aspects (under G06F9/455, Emulation; Interpretation; Software simulation, e.g. virtualisation)
    • G06F2009/45595: Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a scheduling method and apparatus based on a distributed cluster, which can be used in the financial field or other fields. The method comprises the following steps: obtaining M application feature records of N resource object models in a target distributed cluster, wherein N is less than or equal to M; performing clustering processing based on a global optimization algorithm on the M application feature records to obtain resource object model groups respectively corresponding to K business scenario categories, wherein K is the number of preset clustering centers; determining affinity relation information and anti-affinity relation information of the resource object models according to the resource object model groups respectively corresponding to the K business scenario categories; and completing scheduling of the resource object models in the target distributed cluster according to the affinity relation information and the anti-affinity relation information. The application can improve the flexibility and adaptive capability of configuring the affinity relation information and anti-affinity relation information of the resource object model, and can further improve scheduling efficiency, increase the resource utilization rate and stability of the cluster, and reduce communication delay.

Description

Scheduling method and device based on distributed cluster
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a scheduling method and apparatus based on a distributed cluster.
Background
Distributed cloud platforms are now mainstream. Kubernetes, as a container orchestration system, can automatically deploy, scale, and manage containerized applications, and is one of the mainstream cloud platform technologies. A Pod is the smallest deployable compute unit that can be created and managed in Kubernetes, the smallest resource object created or deployed by a user in the resource object model, and a combination of one or more containers. Pod scheduling affects the efficiency of deploying and migrating the applications contained in a Pod. In large-scale clusters, affinity rules and anti-affinity rules between Pods are critical to Pod scheduling.
The existing Pod affinity configuration approach mainly relies on static rules: manually configured Pod affinity and anti-affinity rules can ensure that Pods are deployed on specific nodes as expected, but when complex business scenarios arise or application feature records change, this approach lacks flexibility and adaptability, which affects the resource utilization, communication delay, stability, and other properties of Kubernetes clusters.
Disclosure of Invention
To address at least one problem in the prior art, the application provides a scheduling method and apparatus based on a distributed cluster, which can improve the flexibility and adaptive capability of configuring the affinity relation information and anti-affinity relation information of the resource object model, thereby improving scheduling efficiency, increasing the resource utilization rate and stability of the cluster, and reducing communication delay.
In order to solve the technical problems, the application provides the following technical scheme:
in a first aspect, the present application provides a scheduling method based on a distributed cluster, including:
obtaining M application feature records of N resource object models in a target distributed cluster, wherein N is less than or equal to M;
clustering the M application feature records based on a global optimization algorithm to obtain resource object model groups corresponding to K business scene categories respectively, wherein the sum of the number of the resource object models in all the resource object model groups is N, and K is the number of preset clustering centers;
determining affinity relation information and anti-affinity relation information of the resource object model according to the resource object model groups corresponding to the K business scenario categories respectively;
and completing the scheduling of the resource object model in the target distributed cluster according to the affinity relation information and the anti-affinity relation information.
In one embodiment, after the scheduling of the resource object model in the target distributed cluster is completed according to the affinity relationship information and the anti-affinity relationship information, the method further comprises:
acquiring current effect evaluation data of the target distributed cluster;
if the effect evaluation data does not meet a preset effect evaluation condition, adjusting parameter values in the global optimization algorithm, and performing the clustering processing on the M application feature records again based on the adjusted global optimization algorithm to obtain the resource object model groups respectively corresponding to the K business scenario categories;
Determining affinity relation information and anti-affinity relation information of the resource object model according to the resource object model groups corresponding to the K business scenario categories respectively;
and completing the scheduling of the resource object model in the target distributed cluster according to the affinity relation information and the anti-affinity relation information.
In one embodiment, the distributed cluster-based scheduling method further includes:
and if the effect evaluation data meets the preset effect evaluation condition, determining that the evaluation is passed, and stopping the current operation.
In one embodiment, the clustering processing based on the global optimization algorithm is performed on the M application feature records to obtain resource object model groups corresponding to the K service scene categories, where the clustering processing includes:
generating K clustering centers by using the global optimization algorithm;
dividing M application feature records into K groups of feature record groups, wherein each group of feature record groups comprises a plurality of application feature records, and the clustering centers are in one-to-one correspondence with the feature record groups;
setting, as an optimization target of the global optimization algorithm, that the sum of the similarities between each cluster center and the application feature records in the corresponding feature record group is greater than a similarity-sum threshold;
And continuously clustering the application feature records according to the similarity between the application feature records and the clustering center, and continuously adjusting the clustering center according to a clustering result until the optimization target of the global optimization algorithm is met, so as to obtain K groups of resource object model groups, wherein each resource object model group comprises a plurality of resource object models.
In one embodiment, the determining the affinity relation information and the anti-affinity relation information of the resource object model according to the resource object model groups corresponding to the K business scenario categories respectively includes:
affinity relationship information is configured for resource object models belonging to the same resource object model group, and anti-affinity relationship information is configured for resource object models belonging to different resource object model groups.
In one embodiment, the obtaining M application feature records of N resource object models in the target distributed cluster includes:
acquiring, at regular intervals, M application feature records of the N resource object models in the target distributed cluster.
In one embodiment, the adjusting the parameter value in the global optimization algorithm if the effect evaluation data does not meet a preset effect evaluation condition includes:
If there is a node in the target distributed cluster whose resource utilization rate is less than the resource utilization rate threshold, or whose communication delay exceeds the communication delay threshold, adjusting the parameter values in the global optimization algorithm, wherein the parameter values include: values of the annealing start temperature, the termination temperature, and the cooling rate.
In a second aspect, the present application provides a scheduling apparatus based on a distributed cluster, including:
the first acquisition module is used for acquiring M application feature records of N resource object models in the target distributed cluster, wherein N is less than or equal to M;
the clustering module is used for carrying out clustering processing on the M application feature records based on a global optimization algorithm to obtain resource object model groups corresponding to K business scene categories respectively, wherein the sum of the number of the resource object models in all the resource object model groups is N, and K is the number of preset clustering centers;
the determining module is used for determining affinity relation information and anti-affinity relation information of the resource object models according to the resource object model groups corresponding to the K business scenario categories respectively;
and the first scheduling module is used for completing the scheduling of the resource object model in the target distributed cluster according to the affinity relation information and the anti-affinity relation information.
In one embodiment, the distributed cluster-based scheduling apparatus further includes:
the second acquisition module is used for acquiring the current effect evaluation data of the target distributed cluster;
the adjusting module is used for adjusting the parameter values in the global optimization algorithm if the effect evaluation data does not meet the preset effect evaluation condition, and performing the clustering processing on the M application feature records again based on the adjusted global optimization algorithm to obtain the resource object model groups respectively corresponding to the K business scenario categories;
the rule determining module is used for determining affinity relation information and anti-affinity relation information of the resource object models according to the resource object model groups corresponding to the K business scenario categories respectively;
and the second scheduling module is used for completing the scheduling of the resource object model in the target distributed cluster according to the affinity relation information and the anti-affinity relation information.
In one embodiment, the scheduling apparatus based on distributed clusters further includes:
and the suspension module is used for determining that the evaluation passes if the effect evaluation data meets the preset effect evaluation condition, and suspending the current operation.
In one embodiment, the clustering module includes:
the generating unit is used for generating K clustering centers by utilizing the global optimization algorithm;
the dividing unit is used for dividing the M application feature records into K groups of feature record groups, wherein each group of feature record groups comprises a plurality of application feature records, and the clustering centers are in one-to-one correspondence with the feature record groups;
the setting unit is used for setting that the sum value of the similarity between each clustering center and each application feature record in the corresponding feature record group is larger than a similarity sum value threshold value as an optimization target of the global optimization algorithm;
and the clustering unit is used for continuously clustering the application feature records according to the similarity between the application feature records and the clustering center, continuously adjusting the clustering center according to the clustering result until the optimization target of the global optimization algorithm is met, and obtaining K groups of resource object model groups, wherein each resource object model group comprises a plurality of resource object models.
In one embodiment, the determining module includes:
and the configuration unit is used for configuring affinity relation information for the resource object models belonging to the same resource object model group and configuring anti-affinity relation information for the resource object models belonging to different resource object model groups.
In one embodiment, the acquisition module includes:
and the acquisition unit is used for regularly acquiring M application feature records of the N resource object models in the target distributed cluster.
In one embodiment, the adjustment module includes:
the adjusting unit is configured to adjust a parameter value in the global optimization algorithm if there is a node in the target distributed cluster, where the resource utilization rate is smaller than a resource utilization rate threshold, or the communication delay exceeds a communication delay threshold, and the parameter value includes: values of annealing initiation temperature, termination temperature, and cooling rate.
In a third aspect, the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the distributed cluster-based scheduling method.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon computer instructions that when executed implement the distributed cluster-based scheduling method.
As can be seen from the above technical solution, the present application provides a scheduling method and apparatus based on a distributed cluster. The method comprises: obtaining M application feature records of N resource object models in a target distributed cluster, wherein N is less than or equal to M; performing clustering processing based on a global optimization algorithm on the M application feature records to obtain resource object model groups respectively corresponding to K business scenario categories, wherein the sum of the numbers of resource object models in all the resource object model groups is N, and K is the number of preset clustering centers; determining affinity relation information and anti-affinity relation information of the resource object models according to the resource object model groups respectively corresponding to the K business scenario categories; and completing scheduling of the resource object models in the target distributed cluster according to the affinity relation information and the anti-affinity relation information. In this way, the flexibility and adaptive capability of configuring the affinity relation information and anti-affinity relation information of the resource object model can be improved, scheduling efficiency can be further improved, the resource utilization rate and stability of the cluster can be increased, and communication delay can be reduced. Specifically, the node affinity policy can be adaptively adjusted according to the business scenario, which improves application performance in the cluster, reduces resource consumption, and increases system stability. Application performance can be improved: by intelligently identifying business scenarios and demands, corresponding affinity rules are automatically generated, the layout among Pods is optimized, and resource contention and communication delay are reduced. Resource consumption can be reduced: cluster resources are used reasonably, unnecessary resource waste is avoided, and overall resource consumption is lowered. System stability can be increased: the affinity rules are adjusted in real time according to the business scenario and the cluster state, improving the elasticity and stability of the cluster. The solution is also easy to deploy and maintain: it can run as an extension plug-in of the Kubernetes cluster, supporting dynamic plug-in deployment in the cluster.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a first flow diagram of a distributed cluster-based scheduling method in an embodiment of the present application;
FIG. 2 is a second flow diagram of a distributed cluster-based scheduling method in an embodiment of the present application;
FIG. 3 is a flow chart of a rule optimization process in one example of the application;
FIG. 4 is a flow chart of a distributed cluster-based scheduling method in one example of the application;
FIG. 5 is a schematic diagram of a distributed cluster-based scheduler in an embodiment of the present application;
fig. 6 is a schematic block diagram of a system configuration of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions in the present specification better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In order to facilitate understanding of the present embodiment, technical terms related to the present embodiment will be described first.
A Node is a worker node of the Kubernetes cluster and can be a physical machine or a virtual machine.
A Pod in the Kubernetes cluster is the encapsulation of one or more containers and is the entity in which an application runs.
Affinity: used to describe the attraction relationship between Pods. When a Pod has an affinity, it is more likely to be scheduled together with other Pods or nodes that have particular attributes. For example, when certain Pods need to share data or services, they can be scheduled onto the same node or topology domain through affinity rules to reduce network latency.
Anti-Affinity: used to describe the repulsion relationship between Pods. When a Pod has an anti-affinity, it tends to avoid being scheduled together with other Pods or nodes that have specific attributes. For example, to increase availability, replicas of critical services can be distributed across different nodes or topology domains to reduce the risk of single-point failure.
Pod Affinity is a feature in Kubernetes used to guide Pod scheduling. It mainly includes two types: affinity and anti-affinity. They allow the scheduling policy of a Pod to be adjusted according to node labels and the attributes of other Pods already running on the node.
Node affinity rules: the scheduling policy of Pod is adjusted according to the node label. For example, labels may be set for particular nodes, and then affinity rules may be used to specify that a certain Pod can only be scheduled onto nodes with these labels.
Pod affinity and anti-affinity rules: the scheduling policy of a Pod is adjusted according to the labels of other Pods already running on the node. For example, affinity rules for a certain Pod may be set to favor scheduling onto nodes where other Pods with a particular label are running.
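For reference only, the following is a minimal sketch of what such rules look like in a standard Kubernetes Pod specification, written as a Python dictionary that mirrors the Kubernetes API fields (nodeAffinity, podAffinity, podAntiAffinity). The label keys and values ("disktype", "scenario-group") are illustrative assumptions, not part of the application.

```python
# Illustrative Pod spec fragment with node affinity, Pod affinity and Pod anti-affinity,
# expressed as a Python dict mirroring standard Kubernetes API fields.
pod_spec_affinity = {
    "affinity": {
        "nodeAffinity": {
            "requiredDuringSchedulingIgnoredDuringExecution": {
                "nodeSelectorTerms": [
                    {"matchExpressions": [
                        {"key": "disktype", "operator": "In", "values": ["ssd"]}
                    ]}
                ]
            }
        },
        "podAffinity": {
            # Prefer co-locating with Pods that carry the assumed "scenario-group" label.
            "preferredDuringSchedulingIgnoredDuringExecution": [
                {"weight": 100,
                 "podAffinityTerm": {
                     "labelSelector": {"matchLabels": {"scenario-group": "video"}},
                     "topologyKey": "kubernetes.io/hostname"}}
            ]
        },
        "podAntiAffinity": {
            # Repel Pods with a given label so that, e.g., replicas spread across nodes.
            "requiredDuringSchedulingIgnoredDuringExecution": [
                {"labelSelector": {"matchLabels": {"scenario-group": "storage"}},
                 "topologyKey": "kubernetes.io/hostname"}
            ]
        },
    }
}
```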
The prior art mainly has the following defects:
1. lack of adaptive capability: affinity configuration methods based on static rules (e.g., node labels, selectors, etc.) have difficulty adapting to dynamic changes in traffic scenarios and cluster states.
2. The maintenance cost is high: the manual configuration of affinity relationship information and anti-affinity relationship information requires some expertise and is difficult to cope with the management requirements of large-scale clusters.
3. The optimization effect is limited: the static configuration method is difficult to realize globally optimal resource utilization and communication delay.
It should be noted that the distributed cluster-based scheduling method and apparatus disclosed in the present application can be used in the financial technology field, and can also be used in any field other than the financial technology field; the application field of the distributed cluster-based scheduling method and apparatus disclosed in the present application is not limited.
The following examples are presented in detail.
In order to improve flexibility and adaptive capacity of configuration of affinity relation information and anti-affinity relation information of a resource object model, and further improve scheduling efficiency, improve resource utilization rate and stability of a cluster, and reduce communication delay, the embodiment provides a distributed cluster-based scheduling method in which an execution subject is a distributed cluster-based scheduling device, and the distributed cluster-based scheduling device includes, but is not limited to, a server, as shown in fig. 1, and the method specifically includes the following:
step 100: and obtaining M application feature records of N resource object models in the target distributed cluster, wherein N is less than or equal to M.
Specifically, the total number of application feature records of the N resource object models may be M. The target distributed cluster can be a Kubernetes cluster, and the resource object model is a Pod in the Kubernetes cluster; an application feature record is the feature data of the application in a resource object model; the resource object models and the application feature records can be in one-to-one or one-to-many correspondence; that is, in the present application example, the resource object models and the applications may have a one-to-one correspondence. Each application feature record may include basic attributes of the application, namely the service type, read-write ratio, access frequency, dependencies, and the project, team, and priority it belongs to. A change in the cluster state may be determined when the application feature records of the resource object models change.
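As a minimal sketch of what an application feature record could look like in code, the class and field names below, and the numeric encoding used for clustering, are assumptions for illustration and are not prescribed by the application.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative "application feature record" with the fields named above.
@dataclass
class AppFeatureRecord:
    pod_name: str            # resource object model (Pod) the record belongs to
    service_type: str        # e.g. "compute", "memory", "network"
    read_write_ratio: float  # reads / (reads + writes)
    access_frequency: float  # normalized request rate
    dependencies: List[str] = field(default_factory=list)
    project: str = ""
    team: str = ""
    priority: int = 0

    def to_vector(self) -> List[float]:
        """Encode the record as a numeric vector for clustering (assumed encoding)."""
        type_code = {"compute": 0.0, "memory": 0.5, "network": 1.0}.get(self.service_type, 0.25)
        return [type_code, self.read_write_ratio, self.access_frequency,
                float(len(self.dependencies)), float(self.priority)]
```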
Step 200: and carrying out clustering processing based on a global optimization algorithm on the M application feature records to obtain resource object model groups corresponding to the K business scene categories respectively, wherein the sum of the number of the resource object models in all the resource object model groups is N, and K is the number of preset clustering centers. The number of preset cluster centers can be set according to practical situations, and the application is not limited to this. M, N, K can each be an integer greater than zero.
Specifically, the resource object models can be divided into a plurality of groups by applying each application feature record to a clustering model based on a global optimization algorithm, where the clustering model includes K-means, DBSCAN, and the like, and the global optimization algorithm can be a genetic algorithm, a simulated annealing algorithm, or the like. The number of clusters can be set, and cluster centers of the set number, which are multidimensional vectors, can be generated by using the global optimization algorithm. The application feature records are divided into feature record groups of the set cluster number, each feature record group comprising the application feature records of a plurality of resource object models, and each cluster center corresponding to one feature record group. Making the sum of the similarities between each cluster center and the application feature records in its corresponding feature record group greater than a similarity-sum threshold is set as the optimization target of the simulated annealing algorithm. The application feature records are then continuously clustered according to the optimization target of the simulated annealing algorithm and the similarity between each application feature record and each cluster center, and the cluster centers are continuously adjusted according to the clustering result, so that in each clustering the similarity between the clustered objects in each resource object model group and the cluster center is maximized, until the cluster centers no longer change, thereby obtaining the set number of resource object model groups, each comprising a plurality of resource object models. The business scenario categories can include, for example, high-traffic video services, highly sensitive account-related services, and basic storage services.
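The following is a minimal sketch of this clustering step, assuming cosine similarity between feature vectors and a plain simulated annealing loop over the K cluster centers. The parameters t_start, t_end, and cooling correspond to the annealing start temperature, termination temperature, and cooling rate mentioned later; their values, and the helper names, are illustrative assumptions rather than the application's own implementation.

```python
import math
import random
from typing import List

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1e-9
    nb = math.sqrt(sum(x * x for x in b)) or 1e-9
    return dot / (na * nb)

def total_similarity(records: List[List[float]], centers: List[List[float]]) -> float:
    # Each record contributes its similarity to the most similar center.
    return sum(max(cosine_similarity(r, c) for c in centers) for r in records)

def anneal_cluster(records: List[List[float]], k: int,
                   t_start: float = 1.0, t_end: float = 1e-3,
                   cooling: float = 0.95) -> List[int]:
    """Return one cluster index per record; the K centers are adjusted by simulated annealing."""
    current = [list(records[i]) for i in random.sample(range(len(records)), k)]
    current_score = total_similarity(records, current)
    t = t_start
    while t > t_end:
        candidate = [[x + random.gauss(0, t) for x in c] for c in current]  # perturb centers
        score = total_similarity(records, candidate)
        # Accept improvements, or occasionally worse solutions, per the annealing schedule.
        if score > current_score or random.random() < math.exp((score - current_score) / t):
            current, current_score = candidate, score
        t *= cooling
    return [max(range(k), key=lambda j: cosine_similarity(r, current[j])) for r in records]
```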
Step 300: and determining affinity relation information and anti-affinity relation information of the resource object model according to the resource object model groups corresponding to the K business scenario categories.
Specifically, the resource object model groups may be in one-to-one correspondence with the affinity relation information. Resource object models covered by the same affinity rule may be determined to have an affinity relationship, and resource object models covered by the same anti-affinity rule may be determined to have an anti-affinity relationship.
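A minimal sketch of this mapping is given below: Pods placed in the same group receive an affinity relation, and Pods in different groups receive an anti-affinity relation. The function name and the Pod names in the example are hypothetical.

```python
from typing import Dict, List, Tuple

# Illustrative sketch: turn the clustering result (Pod -> group index) into
# affinity and anti-affinity relation information.
def build_relations(pod_groups: Dict[str, int]) -> Tuple[List[Tuple[str, str]], List[Tuple[str, str]]]:
    affinity: List[Tuple[str, str]] = []
    anti_affinity: List[Tuple[str, str]] = []
    pods = sorted(pod_groups)
    for i, a in enumerate(pods):
        for b in pods[i + 1:]:
            if pod_groups[a] == pod_groups[b]:
                affinity.append((a, b))      # same group: affinity relation
            else:
                anti_affinity.append((a, b)) # different groups: anti-affinity relation
    return affinity, anti_affinity

# Example with three Pods in two scenario groups (names are hypothetical):
aff, anti = build_relations({"video-1": 0, "video-2": 0, "storage-1": 1})
# aff  == [("video-1", "video-2")]
# anti == [("storage-1", "video-1"), ("storage-1", "video-2")]
```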
Step 400: and completing the scheduling of the resource object model in the target distributed cluster according to the affinity relation information and the anti-affinity relation information.
Specifically, the resource object models that have an affinity relationship can be scheduled onto the same Node, and the resource object models that have an anti-affinity relationship are scheduled onto different Nodes, thereby completing resource object model scheduling in the target distributed cluster. Before a Pod is migrated, i.e. before the Pod is scheduled, it must be ensured that the Pod has processed its current requests and no longer receives new requests, which can be achieved by setting the graceful termination timeout of Kubernetes. For applications that require data migration, the data synchronization between the source and target Pods must be completed before migration. During migration, the configuration of the load balancer is adjusted and traffic is transferred from the old Pod to the new Pod, ensuring that user requests are not interrupted.
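One possible way to apply such relation information, sketched below under the assumption that the official kubernetes Python client is available, is to label each workload with its scenario group and patch group-based podAffinity/podAntiAffinity into the Deployment template. The deployment name, namespace, and the "scenario-group" label key are illustrative assumptions, not the application's own implementation.

```python
from kubernetes import client, config

# Illustrative sketch: patch group-based affinity / anti-affinity into a Deployment.
def affinity_patch(group: str) -> dict:
    same_group = {"labelSelector": {"matchExpressions": [
                      {"key": "scenario-group", "operator": "In", "values": [group]}]},
                  "topologyKey": "kubernetes.io/hostname"}
    other_groups = {"labelSelector": {"matchExpressions": [
                        {"key": "scenario-group", "operator": "NotIn", "values": [group]}]},
                    "topologyKey": "kubernetes.io/hostname"}
    return {"spec": {"template": {
        "metadata": {"labels": {"scenario-group": group}},
        "spec": {"affinity": {
            "podAffinity": {"preferredDuringSchedulingIgnoredDuringExecution": [
                {"weight": 100, "podAffinityTerm": same_group}]},
            "podAntiAffinity": {"preferredDuringSchedulingIgnoredDuringExecution": [
                {"weight": 100, "podAffinityTerm": other_groups}]}}}}}}

def apply_group(deployment: str, namespace: str, group: str) -> None:
    config.load_kube_config()   # or load_incluster_config() when running inside the cluster
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace,
                                     body=affinity_patch(group))
```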
In order to improve the reliability of the current effect evaluation of the target distributed cluster, and further improve the reliability of the resource object model scheduling, as shown in fig. 2, in one embodiment, after step 400, the method further includes:
step 500: and acquiring current effect evaluation data of the target distributed cluster.
Specifically, the effect evaluation data may include: resource utilization of the node, communication delay data, and the like.
Step 600: and if the effect evaluation data does not meet the preset effect evaluation condition, adjusting the parameter value in the global optimization algorithm, and carrying out clustering processing on the M application feature records based on the global optimization algorithm again based on the adjusted global optimization algorithm to obtain resource object model groups corresponding to the K business scene categories respectively.
In order to improve the reliability of monitoring the resource utilization rate and communication delay, and further improve scheduling accuracy, the adjusting of the parameter values in the global optimization algorithm in step 600 when the effect evaluation data does not meet the preset effect evaluation condition may include: if there is a node in the target distributed cluster whose resource utilization rate is less than the resource utilization rate threshold, or whose communication delay exceeds the communication delay threshold, adjusting the parameter values in the global optimization algorithm, wherein the parameter values include: values of the annealing start temperature, the termination temperature, and the cooling rate.
Specifically, if there is a node in the target distributed cluster where the resource utilization rate is less than the resource utilization rate threshold, or the communication delay exceeds the communication delay threshold, it may be determined that the current affinity relationship information and the anti-affinity relationship information have poor effects, and the parameter values in the global optimization algorithm are adjusted. The parameters adjusted in the global optimization algorithm may include: annealing start temperature, end temperature, cooling speed, etc. By adjusting the parameter values in the global optimization algorithm, the latest affinity relation information and anti-affinity relation information can be obtained, and scheduling of the resource object model in the target distributed cluster is completed again. In order to improve the efficiency of the current effect evaluation of the target distributed cluster, if the effect evaluation data meets the preset effect evaluation condition, the evaluation can be determined to pass, and the current operation is stopped.
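A minimal sketch of this evaluate-and-retune loop is shown below. Where the per-node metrics come from (e.g. a metrics collector), the threshold values, and the way the annealing parameters are adjusted are all assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class AnnealParams:
    t_start: float = 1.0   # annealing start temperature
    t_end: float = 1e-3    # termination temperature
    cooling: float = 0.95  # cooling rate

def evaluation_passes(node_utilization: Dict[str, float], node_latency_ms: Dict[str, float],
                      util_threshold: float = 0.4, latency_threshold_ms: float = 50.0) -> bool:
    # Fails if any node is under-utilized or any node's communication delay is too high.
    return all(u >= util_threshold for u in node_utilization.values()) and \
           all(l <= latency_threshold_ms for l in node_latency_ms.values())

def retune(params: AnnealParams) -> AnnealParams:
    # One possible adjustment: search longer and cool more slowly to escape local optima.
    return AnnealParams(t_start=params.t_start * 1.5,
                        t_end=params.t_end / 2,
                        cooling=min(0.99, params.cooling + 0.02))
```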
Step 700: and determining affinity relation information and anti-affinity relation information of the resource object model according to the resource object model groups corresponding to the K business scenario categories.
Step 800: and completing the scheduling of the resource object model in the target distributed cluster according to the affinity relation information and the anti-affinity relation information.
In particular, in order to avoid the influence of resource object model scheduling on running applications, the scheduling of the resource object models may be completed in batches within a preset time period, where the preset time period may be set according to the actual situation and is not limited by the present application, for example an off-peak time period.
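For illustration, the sketch below rolls the adjustments out in batches during an assumed off-peak window; the window (02:00 to 05:00), batch size, and pauses are assumptions, not values given by the application.

```python
import datetime
import time
from typing import Callable, List

def in_off_peak_window(now: datetime.datetime) -> bool:
    return 2 <= now.hour < 5   # assumed off-peak window

def roll_out_in_batches(deployments: List[str], apply_fn: Callable[[str], None],
                        batch_size: int = 5, pause_s: float = 60.0) -> None:
    for i in range(0, len(deployments), batch_size):
        while not in_off_peak_window(datetime.datetime.now()):
            time.sleep(300)                      # wait for the off-peak window
        for name in deployments[i:i + batch_size]:
            apply_fn(name)                       # e.g. apply_group(name, namespace, group) above
        time.sleep(pause_s)                      # let the cluster settle between batches
```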
To further improve the reliability of determining the set of resource object models, in one embodiment of the present application, step 200 includes:
step 201: and generating K clustering centers by using the global optimization algorithm.
Step 202: dividing M application feature records into K groups of feature record groups, wherein each group of feature record groups comprises a plurality of application feature records, and the clustering centers are in one-to-one correspondence with the feature record groups.
Step 203: and setting the sum value of the similarity between each cluster center and each application feature record in the corresponding feature record group to be larger than the similarity sum value threshold value as an optimization target of the global optimization algorithm.
Step 204: and continuously clustering the application feature records according to the similarity between the application feature records and the clustering center, and continuously adjusting the clustering center according to a clustering result until the optimization target of the global optimization algorithm is met, so as to obtain K groups of resource object model groups, wherein each resource object model group comprises a plurality of resource object models.
To improve the reliability of determining affinity relationship information and anti-affinity relationship information, in one embodiment, step 300 includes: affinity relationship information is configured for resource object models belonging to the same resource object model group, and anti-affinity relationship information is configured for resource object models belonging to different resource object model groups.
To increase the flexibility and degree of automation of resource object model scheduling, in one embodiment, step 100 includes: acquiring, at regular intervals, M application feature records of the N resource object models in the target distributed cluster.
The application provides an application example of the distributed cluster-based scheduling method, which includes the following specific steps:
1) Business scenario identification: responsible for identifying the business scenarios of applications deployed by a user in the Kubernetes cluster; the business scenarios include, for example: video services, such as live streaming, and high-frequency, bursty small-message services, such as flash sales (seckill).
Step 1: collect application metadata: metadata of deployed applications is collected from the Kubernetes cluster, including but not limited to: container resource requirements (CPU, memory), persistent storage requirements, access patterns, dependencies, application labels and annotations, and the like.
Step 2: basic service type identification: analyze the collected container resource requirements to identify the compute-, memory-, or network-intensive characteristics of the application. For example, an application may be considered compute-intensive if it has a high CPU resource requirement and relatively low memory and network requirements.
Step 3: read-write ratio and access frequency identification: by analyzing the access patterns, characteristics of the application such as the read-write ratio and access frequency can be identified. For example, an application that is primarily read-oriented and has a high access frequency may have high requirements for low-latency access.
Step 4: dependency identification: analyze the dependencies among applications to identify application groups that are mutually dependent, independent, or have specific ordering requirements. For example, a front-end application depends on the services provided by a back-end application; in this case there may be high communication frequency and low latency requirements between the two applications.
Step 5: additional attribute and feature identification: by analyzing the application labels and annotations, additional attributes and features of the application, such as the project, team, and priority it belongs to, can be obtained. This helps to further refine business scenario identification.
Step 6: business scenario feature clustering: according to the analysis results, applications with similar features are grouped using a clustering algorithm (such as K-means, DBSCAN, and the like) to form different business scenario categories. For example, compute-intensive applications requiring low-latency access may be grouped into one category, and memory-intensive applications with high access frequency into another; the above analysis results are equivalent to the application feature data described above.
2) Rule generation.
Step 7: and generating a characteristic description of the service scene, and generating the characteristic description of each service scene according to the clustering result. The description may include: scene type (computationally intensive, storage intensive, etc.), resource demand characteristics, access pattern characteristics, dependency characteristics, etc.
Step 8: according to the identified business scenario feature results, the comprehensive calculation (genetic algorithm, simulated annealing algorithm and the like) generates corresponding Pod affinity and anti-affinity rules, and the functions realized by the Pod affinity and anti-affinity rules can be equivalent to the functions realized by the affinity relation information and the anti-affinity relation information of the resource object model.
Step 9: the generated Pod affinity and anti-affinity rules are applied to the Pod scheduling policy in the Kubernetes cluster.
3) Rule application and adjustment.
Step 10: and adjusting the affinity rule in real time according to the cluster state and the service scene change to optimize the Pod layout.
Step 11: and monitoring the state of the cluster, and collecting information such as the resource use condition, the load condition, the network condition and the like of each pod in the cluster in real time so as to know the running state of the whole cluster.
Step 12: and detecting the service scene change, and identifying whether the service scene of the application changes, such as adding, deleting or updating the application, by analyzing the change of the application metadata.
Step 13: pod affinity and anti-affinity rule effects are evaluated, and based on the current cluster state and business scenario, the effects of the current Pod affinity and anti-affinity rules are evaluated, for example, whether the expected goals of resource utilization, communication delay, etc., are reached.
Step 14: optimization strategy calculation, if the evaluation result shows that the current affinity rule is not good in effect, a new Pod affinity and anti-affinity rule can be calculated by using an optimization algorithm (such as a genetic algorithm, a simulated annealing algorithm and the like). The optimization algorithm calculates the information such as the dependency relationship among the applications according to the cluster state, the business scene and the like. As shown in fig. 3, in one example the optimization rule process includes: metadata collection, analysis of resource requirements, analysis of access patterns, analysis of dependencies, analysis of application tags and notes; clustering service features; describing service characteristics; rule calculation; applying rules; monitoring and evaluating rule effects; and optimizing the rule.
Step 15: rule adjustment policies after determining new Pod affinity and anti-affinity rules, corresponding rule adjustment policies need to be formulated. Policies should minimize the impact on the application being run. For example, during off-peak hours, batch adjustments, etc.
Step 16: pod migration, a.graceful stop: before migrating the pod, it is ensured that it has processed the current request and no more new requests are accepted. This can be achieved by setting an elegant stop timeout time of Kubernetes. b. Data synchronization: for the application needing data migration, the data synchronization between the source and the target pod is ensured to be migrated after the completion. c. Load balancing adjustment: during the migration process, the configuration of the load balancer is adjusted, and the traffic is transferred from the old pod to the new pod, so that the user request is ensured not to be interrupted.
Step 17: and applying new Pod affinity and anti-affinity rules, applying the new affinity rules to the clusters according to the rule adjustment strategy, and re-performing Pod scheduling.
As shown in fig. 4, the distributed cluster-based scheduling method in one example includes: metadata collection; analysis of resource requirements, access patterns, dependencies, and application labels and annotations; cluster monitoring/evaluation; feature clustering; feature description; policy calculation; and policy application.
In order to improve flexibility and adaptive capacity of configuration of affinity relation information and anti-affinity relation information of a resource object model and further improve scheduling efficiency, improve resource utilization rate and stability of a cluster and reduce communication delay, the application provides an embodiment of a distributed cluster-based scheduling device for implementing all or part of content in the distributed cluster-based scheduling method, referring to fig. 5, where the distributed cluster-based scheduling device specifically includes:
the first acquisition module 01 is used for acquiring M application feature records of N resource object models in the target distributed cluster, wherein N is less than or equal to M;
the clustering module 02 is used for carrying out clustering processing on the M application feature records based on a global optimization algorithm to obtain resource object model groups corresponding to K business scene categories respectively, wherein the sum of the number of the resource object models in all the resource object model groups is N, and K is the number of preset clustering centers;
a determining module 03, configured to determine affinity relationship information and anti-affinity relationship information of the resource object model according to the resource object model groups corresponding to the K service scenario categories respectively;
And the first scheduling module 04 is used for completing resource object model scheduling in the target distributed cluster according to the affinity relation information and the anti-affinity relation information.
In one embodiment, the distributed cluster-based scheduling apparatus further includes:
the second acquisition module is used for acquiring the current effect evaluation data of the target distributed cluster;
the adjusting module is used for adjusting the parameter values in the global optimization algorithm if the effect evaluation data does not meet the preset effect evaluation condition, and performing the clustering processing on the M application feature records again based on the adjusted global optimization algorithm to obtain the resource object model groups respectively corresponding to the K business scenario categories;
the rule determining module is used for determining affinity relation information and anti-affinity relation information of the resource object models according to the resource object model groups corresponding to the K business scenario categories respectively;
and the second scheduling module is used for completing the scheduling of the resource object model in the target distributed cluster according to the affinity relation information and the anti-affinity relation information.
In one embodiment, the scheduling apparatus based on distributed clusters further includes:
And the suspension module is used for determining that the evaluation passes if the effect evaluation data meets the preset effect evaluation condition, and suspending the current operation.
In one embodiment, the clustering module includes:
the generating unit is used for generating K clustering centers by utilizing the global optimization algorithm;
the dividing unit is used for dividing the M application feature records into K groups of feature record groups, wherein each group of feature record groups comprises a plurality of application feature records, and the clustering centers are in one-to-one correspondence with the feature record groups;
the setting unit is used for setting that the sum value of the similarity between each clustering center and each application feature record in the corresponding feature record group is larger than a similarity sum value threshold value as an optimization target of the global optimization algorithm;
and the clustering unit is used for continuously clustering the application feature records according to the similarity between the application feature records and the clustering center, continuously adjusting the clustering center according to the clustering result until the optimization target of the global optimization algorithm is met, and obtaining K groups of resource object model groups, wherein each resource object model group comprises a plurality of resource object models.
In one embodiment, the determining module includes:
and the configuration unit is used for configuring affinity relation information for the resource object models belonging to the same resource object model group and configuring anti-affinity relation information for the resource object models belonging to different resource object model groups.
In one embodiment, the acquisition module includes:
and the acquisition unit is used for regularly acquiring M application feature records of the N resource object models in the target distributed cluster.
In one embodiment, the adjustment module includes:
the adjusting unit is configured to adjust a parameter value in the global optimization algorithm if there is a node in the target distributed cluster, where the resource utilization rate is smaller than a resource utilization rate threshold, or the communication delay exceeds a communication delay threshold, and the parameter value includes: values of annealing initiation temperature, termination temperature, and cooling rate.
The embodiment of the distributed cluster-based scheduling apparatus provided in the present disclosure may be specifically used to execute the processing flow of the embodiment of the distributed cluster-based scheduling method, and the functions thereof are not described herein again, and may refer to the detailed description of the embodiment of the distributed cluster-based scheduling method.
In order to improve flexibility and adaptive capacity of configuration of affinity relation information and anti-affinity relation information of a resource object model and further improve scheduling efficiency, improve resource utilization rate and stability of clusters and reduce communication delay, the application provides an embodiment of an electronic device for implementing all or part of contents in the scheduling method based on distributed clusters, wherein the electronic device specifically comprises the following contents:
A processor (processor), a memory (memory), a communication interface (Communications Interface), and a bus; the processor, the memory and the communication interface complete communication with each other through the bus; the communication interface is used for realizing information transmission between the dispatching device based on the distributed cluster and related equipment such as a user terminal; the electronic device may be a desktop computer, a tablet computer, a mobile terminal, etc., and the embodiment is not limited thereto. In this embodiment, the electronic device may be implemented with reference to an embodiment for implementing the distributed cluster-based scheduling method and an embodiment for implementing the distributed cluster-based scheduling apparatus, and the contents thereof are incorporated herein and are not repeated herein.
Fig. 6 is a schematic block diagram of a system configuration of an electronic device 9600 according to an embodiment of the present application. As shown in fig. 6, the electronic device 9600 may include a central processor 9100 and a memory 9140; the memory 9140 is coupled to the central processor 9100. Notably, this fig. 6 is exemplary; other types of structures may also be used in addition to or in place of the structures to implement telecommunications functions or other functions.
In one or more embodiments of the application, the scheduling functionality of the distributed clusters may be integrated into the central processor 9100. The central processor 9100 may be configured to perform the following control:
step 100: m application feature records of N resource object models in the target distributed cluster are obtained, wherein N is less than or equal to M;
step 200: clustering the M application feature records based on a global optimization algorithm to obtain resource object model groups corresponding to K business scene categories respectively, wherein the sum of the number of the resource object models in all the resource object model groups is N, and K is the number of preset clustering centers;
step 300: determining affinity relation information and anti-affinity relation information of the resource object model according to the resource object model groups corresponding to the K business scenario categories respectively;
step 400: and completing the scheduling of the resource object model in the target distributed cluster according to the affinity relation information and the anti-affinity relation information.
As can be seen from the above description, the electronic device provided by the embodiment of the present application can improve flexibility and adaptive capability of configuration of affinity relation information and anti-affinity relation information of a resource object model, further improve scheduling efficiency, improve resource utilization rate and stability of a cluster, and reduce communication delay.
In another embodiment, the distributed cluster-based scheduling apparatus may be configured separately from the central processor 9100, for example, the distributed cluster-based scheduling apparatus may be configured as a chip connected to the central processor 9100, and the distributed cluster scheduling function is implemented under the control of the central processor.
As shown in fig. 6, the electronic device 9600 may further include: a communication module 9110, an input unit 9120, an audio processor 9130, a display 9160, and a power supply 9170. It is noted that the electronic device 9600 need not include all of the components shown in fig. 6; in addition, the electronic device 9600 may further include components not shown in fig. 6, and reference may be made to the related art.
As shown in fig. 6, the central processor 9100, sometimes referred to as a controller or operational control, may include a microprocessor or other processor device and/or logic device, which central processor 9100 receives inputs and controls the operation of the various components of the electronic device 9600.
The memory 9140 may be, for example, one or more of a buffer, a flash memory, a hard drive, a removable medium, a volatile memory, a non-volatile memory, or other suitable device. Relevant information may be stored therein, and a program for processing that information may also be stored. The central processor 9100 can execute the program stored in the memory 9140 to realize information storage or processing, and the like.
The input unit 9120 provides input to the central processor 9100. The input unit 9120 is, for example, a key or a touch input device. The power supply 9170 is used to provide power to the electronic device 9600. The display 9160 is used for displaying display objects such as images and characters. The display may be, for example, but not limited to, an LCD display.
The memory 9140 may be a solid-state memory, such as a read-only memory (ROM), a random access memory (RAM), a SIM card, or the like. It may also be a memory that retains information even when powered down, and that can be selectively erased and provided with further data, an example of which is sometimes referred to as an EPROM or the like. The memory 9140 may also be some other type of device. The memory 9140 includes a buffer memory 9141 (sometimes referred to as a buffer). The memory 9140 may include an application/function storage portion 9142, which stores application programs and function programs or a flow for executing operations of the electronic device 9600 by the central processor 9100.
The memory 9140 may also include a data store 9143, the data store 9143 for storing data, such as contacts, digital data, pictures, sounds, and/or any other data used by an electronic device. The driver storage portion 9144 of the memory 9140 may include various drivers of the electronic device for communication functions and/or for performing other functions of the electronic device (e.g., messaging applications, address book applications, etc.).
The communication module 9110 is a transmitter/receiver 9110 that transmits and receives signals via an antenna 9111. A communication module (transmitter/receiver) 9110 is coupled to the central processor 9100 to provide input signals and receive output signals, as in the case of conventional mobile communication terminals.
Based on different communication technologies, a plurality of communication modules 9110, such as a cellular network module, a bluetooth module, and/or a wireless local area network module, etc., may be provided in the same electronic device. The communication module (transmitter/receiver) 9110 is also coupled to a speaker 9131 and a microphone 9132 via an audio processor 9130 to provide audio output via the speaker 9131 and to receive audio input from the microphone 9132 to implement usual telecommunications functions. The audio processor 9130 can include any suitable buffers, decoders, amplifiers and so forth. In addition, the audio processor 9130 is also coupled to the central processor 9100 so that sound can be recorded locally through the microphone 9132 and sound stored locally can be played through the speaker 9131.
As can be seen from the above description, the electronic device provided by the embodiment of the present application can improve the flexibility and adaptability of configuring the affinity relation information and anti-affinity relation information of the resource object models, thereby improving scheduling efficiency, increasing the resource utilization rate and stability of the cluster, and reducing communication delay.
The embodiment of the present application further provides a computer readable storage medium capable of implementing all the steps of the distributed cluster-based scheduling method in the above embodiment. The computer readable storage medium stores a computer program that, when executed by a processor, implements all the steps of the distributed cluster-based scheduling method in the above embodiment; for example, the processor implements the following steps when executing the computer program:
step 100: m application feature records of N resource object models in the target distributed cluster are obtained, wherein N is less than or equal to M;
step 200: clustering the M application feature records based on a global optimization algorithm to obtain resource object model groups corresponding to K business scenario categories respectively, wherein the sum of the number of the resource object models in all the resource object model groups is N, and K is the number of preset clustering centers;
step 300: determining affinity relation information and anti-affinity relation information of the resource object model according to the resource object model groups corresponding to the K business scenario categories respectively;
step 400: and completing the scheduling of the resource object model in the target distributed cluster according to the affinity relation information and the anti-affinity relation information.
As can be seen from the above description, the computer readable storage medium provided by the embodiments of the present application can improve the flexibility and adaptability of configuring the affinity relation information and anti-affinity relation information of the resource object models, thereby improving scheduling efficiency, increasing the resource utilization rate and stability of the cluster, and reducing communication delay.
The embodiments in the present application are described in a progressive manner; the same and similar parts of the embodiments may be referred to one another, and each embodiment mainly describes its differences from the other embodiments. For relevant details, refer to the description of the method embodiments.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principles and embodiments of the present application have been described in detail with reference to specific examples, which are provided only to facilitate understanding of the method and core ideas of the present application. Meanwhile, those skilled in the art may make variations to the specific embodiments and the scope of application in accordance with the ideas of the present application; in view of the above, the content of this description should not be construed as limiting the present application.

Claims (10)

1. A distributed cluster-based scheduling method, comprising:
obtaining M application feature records of N resource object models in the target distributed cluster, wherein N is less than or equal to M;
clustering the M application feature records based on a global optimization algorithm to obtain resource object model groups corresponding to K business scenario categories respectively, wherein the sum of the number of the resource object models in all the resource object model groups is N, and K is the number of preset clustering centers;
determining affinity relation information and anti-affinity relation information of the resource object model according to the resource object model groups corresponding to the K business scenario categories respectively;
and completing the scheduling of the resource object model in the target distributed cluster according to the affinity relation information and the anti-affinity relation information.
2. The distributed cluster-based scheduling method according to claim 1, further comprising, after the completion of the scheduling of the resource object model in the target distributed cluster according to the affinity relation information and the anti-affinity relation information:
acquiring current effect evaluation data of the target distributed cluster;
if the effect evaluation data do not meet the preset effect evaluation condition, adjusting parameter values in the global optimization algorithm, and performing the clustering processing on the M application feature records again based on the adjusted global optimization algorithm to obtain the resource object model groups corresponding to the K business scenario categories respectively;
determining affinity relation information and anti-affinity relation information of the resource object model according to the resource object model groups corresponding to the K business scenario categories respectively;
and completing the scheduling of the resource object model in the target distributed cluster according to the affinity relation information and the anti-affinity relation information.
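As a purely illustrative aid to the feedback loop of claim 2, the following Python sketch evaluates effect data for the target distributed cluster and, when the preset evaluation condition is not met, adjusts the global-optimization parameters before the clustering, affinity configuration and scheduling are repeated. The thresholds, the stand-in evaluation data and the concrete parameter adjustments are assumptions of this sketch and are not values taken from the present application.

```python
# Illustrative feedback loop: evaluate, and if the condition fails, adjust
# parameters and repeat clustering/scheduling (indicated here by prints only).
def evaluate_cluster(utilization, delay_ms, util_min=0.6, delay_max=50.0):
    """Preset effect evaluation condition: utilization high enough, delay low enough."""
    return utilization >= util_min and delay_ms <= delay_max

def tune_parameters(params):
    # Example adjustment only: widen the search of the global optimization algorithm.
    return {"initial_temperature": params["initial_temperature"] * 1.5,
            "termination_temperature": params["termination_temperature"] * 0.5,
            "cooling_rate": min(0.99, params["cooling_rate"] + 0.02)}

params = {"initial_temperature": 100.0, "termination_temperature": 0.01, "cooling_rate": 0.95}
rounds = [(0.45, 80.0), (0.58, 55.0), (0.72, 30.0)]   # stand-in (utilization, delay) per round
for utilization, delay_ms in rounds:
    if evaluate_cluster(utilization, delay_ms):
        print("evaluation passed; keep the current affinity configuration and stop")
        break
    params = tune_parameters(params)
    print("re-clustering and re-scheduling with adjusted parameters:", params)
```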
3. The distributed cluster-based scheduling method of claim 2, further comprising:
and if the effect evaluation data meets the preset effect evaluation condition, determining that the evaluation is passed, and stopping the current operation.
4. The distributed cluster-based scheduling method according to claim 1, wherein the clustering of the M application feature records based on the global optimization algorithm to obtain the resource object model groups corresponding to the K business scenario categories respectively comprises:
generating K clustering centers by using the global optimization algorithm;
dividing M application feature records into K groups of feature record groups, wherein each group of feature record groups comprises a plurality of application feature records, and the clustering centers are in one-to-one correspondence with the feature record groups;
setting, as the optimization target of the global optimization algorithm, that the sum of the similarities between each cluster center and the application feature records in its corresponding feature record group is larger than a similarity sum threshold;
and continuously clustering the application feature records according to the similarity between the application feature records and the clustering center, and continuously adjusting the clustering center according to a clustering result until the optimization target of the global optimization algorithm is met, so as to obtain K groups of resource object model groups, wherein each resource object model group comprises a plurality of resource object models.
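The Python sketch below illustrates, in simplified form, the procedure of claim 4: K cluster centers are generated, each record is assigned to its most similar center, and the centers are adjusted from the clustering result until the similarity-sum target is exceeded. The use of cosine similarity and a k-means-style center update is an assumption made here for brevity, in place of the annealing-based global optimization described elsewhere in the present application.

```python
# Simplified, illustrative clustering with a similarity-sum optimization target.
import math
import random

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def cluster(records, k, sim_threshold, max_iter=100):
    centers = random.sample(records, k)              # generate K initial cluster centers
    groups = [[] for _ in range(k)]
    for _ in range(max_iter):
        groups = [[] for _ in range(k)]
        for rec in records:                          # assign each record to its most similar center
            best = max(range(k), key=lambda i: cosine(rec, centers[i]))
            groups[best].append(rec)
        centers = [                                  # adjust centers from the clustering result
            [sum(col) / len(grp) for col in zip(*grp)] if grp else centers[i]
            for i, grp in enumerate(groups)
        ]
        total_sim = sum(cosine(rec, centers[i])      # similarity sum of the current grouping
                        for i, grp in enumerate(groups) for rec in grp)
        if total_sim > sim_threshold:                # optimization target reached
            break
    return groups

feature_records = [[0.4, 0.6, 0.2], [0.5, 0.6, 0.1], [0.9, 0.1, 0.8], [0.8, 0.2, 0.9]]
print(cluster(feature_records, k=2, sim_threshold=3.8))
```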
5. The distributed cluster-based scheduling method according to claim 1, wherein determining affinity relation information and anti-affinity relation information of the resource object model according to the resource object model groups corresponding to the K business scenario categories respectively comprises:
configuring affinity relation information for resource object models belonging to the same resource object model group, and configuring anti-affinity relation information for resource object models belonging to different resource object model groups.
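One possible, purely illustrative realization of claim 5 is sketched below in Python: resource object models in the same group receive a shared group label and a preference to be co-located (affinity), while models receive anti-affinity towards every other group. The Kubernetes-style label and affinity fields are simplified assumptions, not a data format prescribed by the present application.

```python
# Illustrative derivation of affinity / anti-affinity configuration from groups.
def build_affinity(model_groups):
    """model_groups: e.g. {"group-0": ["order-svc", "pay-svc"], "group-1": ["report-job"]}."""
    specs = {}
    for group, models in model_groups.items():
        other_groups = [g for g in model_groups if g != group]
        for model in models:
            specs[model] = {
                "labels": {"scene-group": group},
                # prefer co-location with models of the same business scenario group
                "affinity": {"scene-group": group},
                # keep away from models belonging to every other group
                "anti_affinity": [{"scene-group": g} for g in other_groups],
            }
    return specs

groups = {"group-0": ["order-svc", "pay-svc"], "group-1": ["report-job"]}
for model, spec in build_affinity(groups).items():
    print(model, spec)
```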
6. The distributed cluster-based scheduling method according to claim 1, wherein the obtaining M application feature records of N resource object models in the target distributed cluster includes:
and periodically acquiring the M application feature records of the N resource object models in the target distributed cluster.
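A minimal sketch of the timed acquisition in claim 6 is given below; the default interval, the number of rounds and the collect_records callable are all assumptions introduced only for illustration.

```python
# Illustrative timed collection of application feature records.
import time

def collect_periodically(collect_records, interval_s=60, rounds=3):
    history = []
    for _ in range(rounds):
        history.extend(collect_records())   # acquire the latest application feature records
        time.sleep(interval_s)              # wait until the next scheduled acquisition
    return history

# Stand-in collector returning a single hypothetical record per call.
records = collect_periodically(lambda: [{"model": "order-svc", "cpu": 0.4}], interval_s=1, rounds=2)
print(len(records))   # 2 records collected over 2 timed rounds
```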
7. The distributed cluster-based scheduling method according to claim 2, wherein adjusting the parameter value in the global optimization algorithm if the effect evaluation data does not satisfy a preset effect evaluation condition includes:
if the resource utilization rate is smaller than the resource utilization rate threshold value or the communication delay exceeds the communication delay threshold value in the target distributed cluster, adjusting parameter values in the global optimization algorithm, wherein the parameter values comprise: values of annealing initiation temperature, termination temperature, and cooling rate.
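For claim 7, the sketch below adjusts the annealing parameters of the global optimization algorithm when the cluster's resource utilization rate falls below its threshold or the communication delay exceeds its threshold; the concrete threshold values and multipliers are assumptions of this sketch, not values from the present application.

```python
# Illustrative adjustment of annealing parameters based on cluster effect data.
def adjust_annealing(params, utilization, delay_ms,
                     util_threshold=0.6, delay_threshold=50.0):
    """Widen the annealing search when resources are under-utilized or delay is too high."""
    if utilization < util_threshold or delay_ms > delay_threshold:
        return {
            "initial_temperature": params["initial_temperature"] * 2.0,        # explore more states
            "termination_temperature": params["termination_temperature"] * 0.5,  # anneal longer
            "cooling_rate": min(0.99, params["cooling_rate"] + 0.01),           # cool more slowly
        }
    return params

params = {"initial_temperature": 100.0, "termination_temperature": 0.01, "cooling_rate": 0.95}
print(adjust_annealing(params, utilization=0.42, delay_ms=75.0))
```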
8. A distributed cluster-based scheduling apparatus, comprising:
the first acquisition module is used for acquiring M application feature records of N resource object models in the target distributed cluster, wherein N is less than or equal to M;
the clustering module is used for carrying out clustering processing on the M application feature records based on a global optimization algorithm to obtain resource object model groups corresponding to K business scenario categories respectively, wherein the sum of the number of the resource object models in all the resource object model groups is N, and K is the number of preset clustering centers;
the determining module is used for determining affinity relation information and anti-affinity relation information of the resource object models according to the resource object model groups corresponding to the K business scenario categories respectively;
and the first scheduling module is used for completing the scheduling of the resource object model in the target distributed cluster according to the affinity relation information and the anti-affinity relation information.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the distributed cluster-based scheduling method of any one of claims 1 to 7 when executing the program.
10. A computer readable storage medium having stored thereon computer instructions, which when executed by a processor implement the distributed cluster based scheduling method of any one of claims 1 to 7.
CN202311114893.8A 2023-08-31 2023-08-31 Scheduling method and device based on distributed cluster Pending CN117149423A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311114893.8A CN117149423A (en) 2023-08-31 2023-08-31 Scheduling method and device based on distributed cluster

Publications (1)

Publication Number Publication Date
CN117149423A true CN117149423A (en) 2023-12-01

Family

ID=88898229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311114893.8A Pending CN117149423A (en) 2023-08-31 2023-08-31 Scheduling method and device based on distributed cluster

Country Status (1)

Country Link
CN (1) CN117149423A (en)

Similar Documents

Publication Publication Date Title
CN113377540B (en) Cluster resource scheduling method and device, electronic equipment and storage medium
US11704144B2 (en) Creating virtual machine groups based on request
CN108804227B (en) Method for computing-intensive task unloading and optimal resource allocation based on mobile cloud computing
CN107423085B (en) Method and apparatus for deploying applications
CN107430528A (en) Opportunistic resource migration is placed with optimizing resource
CN109271106B (en) Message storage method, message reading method, message storage device, message reading device, server and storage medium
JP2018515844A (en) Data processing method and system
CN104243405A (en) Request processing method, device and system
CN105812175B (en) Resource management method and resource management equipment
CN111694517B (en) Distributed data migration method, system and electronic equipment
CN106856438A (en) A kind of method of Network instantiation, device and NFV systems
CN111858050B (en) Server cluster hybrid deployment method, cluster management node and related system
CN115297008B (en) Collaborative training method, device, terminal and storage medium based on intelligent computing network
CN113806075A (en) Method, device and equipment for container hot updating CPU core of kubernets cluster and readable medium
CN108833592A (en) Cloud host schedules device optimization method, device, equipment and storage medium
CN112995303A (en) Cross-cluster scheduling method and device
CN108875035A (en) The date storage method and relevant device of distributed file system
CN111444309B (en) System for learning graph
CN111597035A (en) Simulation engine time advancing method and system based on multiple threads
CN112953993A (en) Resource scheduling method, device, network system and storage medium
CN112527450B (en) Super-fusion self-adaptive method, terminal and system based on different resources
CN112396511A (en) Distributed wind control variable data processing method, device and system
CN112631716A (en) Database container scheduling method and device, electronic equipment and storage medium
CN115002215B (en) Cloud government enterprise oriented resource allocation model training method and resource allocation method
CN117149423A (en) Scheduling method and device based on distributed cluster

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination