CN117349035A - Workload scheduling method, device, equipment and storage medium - Google Patents

Workload scheduling method, device, equipment and storage medium

Info

Publication number
CN117349035A
CN117349035A
Authority
CN
China
Prior art keywords
target workload
workload
container
image
container image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311650829.1A
Other languages
Chinese (zh)
Other versions
CN117349035B (en)
Inventor
何俊桦
李学峰
张铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongdian Cloud Computing Technology Co ltd
Original Assignee
Zhongdian Cloud Computing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongdian Cloud Computing Technology Co ltd filed Critical Zhongdian Cloud Computing Technology Co ltd
Priority to CN202311650829.1A priority Critical patent/CN117349035B/en
Publication of CN117349035A publication Critical patent/CN117349035A/en
Application granted granted Critical
Publication of CN117349035B publication Critical patent/CN117349035B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5077Logical partitioning of resources; Management or configuration of virtualized resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the disclosure relates to a workload scheduling method, device, equipment and storage medium, wherein the method comprises the following steps: in the process of creating the target workload, adding a scheduling gate for the target workload under the condition that the target workload is detected to meet a preset condition; checking the architecture type supported by each container image in the target workload if the presence of the scheduling gate on the target workload is detected; in the case that a compatible architecture type is determined based on the architecture types supported by each container image in the target workload, adding a node affinity tag to the target workload based on the compatible architecture type, and removing the scheduling gate from the target workload. According to the embodiment of the disclosure, the target workload can be scheduled to nodes with a compatible architecture, so that the problem that the target workload cannot work normally is avoided.

Description

Workload scheduling method, device, equipment and storage medium
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a workload scheduling method, device, equipment and storage medium.
Background
In recent years, with the development of computer technology, various cloud vendors have begun providing multi-architecture clusters, i.e., clusters containing nodes of different architectures. For example, Kubernetes clusters have gradually developed from an initial single architecture (x86) to multiple architectures (x86 and arm, or x86 and risc-v, etc.).
However, with the advent of multi-architecture clusters, workload scheduling also faces new challenges, because workloads may be scheduled to nodes with an incompatible architecture. For example, the container image corresponding to a workload may be adapted to x86 nodes, but when the workload is scheduled it may be placed on a non-x86 node, causing the container image corresponding to the workload to fail to run.
Disclosure of Invention
In order to solve the above technical problems or at least partially solve the above technical problems, embodiments of the present disclosure provide a workload scheduling method, device, equipment, and storage medium.
A first aspect of an embodiment of the present disclosure provides a method for scheduling a workload, the method including:
in the process of creating the target workload, adding a scheduling gate for the target workload under the condition that the target workload is detected to meet the preset condition;
checking the architecture type supported by the container image, for each container image in the target workload, if the presence of the scheduling gate on the target workload is detected;
under the condition that the compatible architecture type is determined based on the architecture type supported by each container image in the target workload, adding a node affinity tag for the target workload based on the compatible architecture type, and removing a scheduling gate for the target workload, so that the master scheduler schedules the target workload based on the node affinity tag under the condition that the master scheduler monitors that the scheduling gate does not exist on the target workload.
A second aspect of an embodiment of the present disclosure provides a scheduling apparatus of a workload, the apparatus including:
the first adding module is used for adding a scheduling gate for the target workload when the target workload is detected to meet the preset condition in the creating process of the target workload;
a first checking module for checking, for each container image in the target workload, the architecture type supported by the container image if it is detected that a scheduling gate exists on the target workload;
and the second adding module is used for adding a node affinity tag for the target workload based on the compatible architecture type under the condition that the compatible architecture type is determined based on the architecture type supported by each container image in the target workload, and removing a scheduling gate for the target workload, so that the main scheduler schedules the target workload based on the node affinity tag under the condition that the main scheduler monitors that the scheduling gate does not exist on the target workload.
A third aspect of the disclosed embodiments provides an electronic device, the electronic device comprising: a processor and a memory, wherein the memory has stored therein a computer program which, when executed by the processor, performs the method of the first aspect described above.
A fourth aspect of the disclosed embodiments provides a computer readable storage medium having stored therein a computer program which, when executed by a processor, can implement the method of the first aspect described above.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
According to the embodiment of the disclosure, in the process of creating the target workload, a scheduling gate can be added for the target workload under the condition that the target workload is detected to meet the preset condition; the architecture type supported by each container image in the target workload is checked if the presence of the scheduling gate on the target workload is detected; under the condition that the compatible architecture type is determined based on the architecture types supported by each container image in the target workload, a node affinity tag is added for the target workload based on the compatible architecture type, and the scheduling gate is removed from the target workload, so that the master scheduler schedules the target workload based on the node affinity tag under the condition that the master scheduler monitors that no scheduling gate exists on the target workload. Therefore, by adopting this technical scheme, a scheduling gate can be added for the target workload so that the master scheduler does not schedule the target workload at first; then, under the condition that the compatible architecture type of the target workload is determined, a node affinity tag is added for the target workload based on the compatible architecture type, so that the master scheduler schedules the target workload based on the node affinity tag once no scheduling gate exists on the target workload, thereby avoiding the problem that the target workload cannot work normally.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flow chart of a method of scheduling a workload provided by an embodiment of the present disclosure;
FIG. 2 is a workflow diagram of an electronic device in a scheduling process of a workload provided by an embodiment of the present disclosure;
FIG. 3 is a workflow diagram of a slave scheduler in the scheduling of a workload provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a workload scheduler according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
The applicant has found through research that, with the advent of multi-architecture clusters, workload scheduling faces new challenges, because a workload may be scheduled to a node whose architecture is not adapted. Taking a Kubernetes cluster as an example, the Kubernetes scheduler does not consider, when scheduling a workload, whether the architectures supported by its container images are compatible with the architectures of the nodes, which leads to the following problems: 1. When migrating a workload from a single-architecture to a multi-architecture scenario, for example from an original x86 cluster to a multi-architecture cluster, the workload may be scheduled to non-x86 nodes, so the workload's container images may fail to run. If all the container images used by the workload are recompiled to support the new architecture, the operation and maintenance workload is large, the operation is complex and error-prone, and the service is easily interrupted. 2. When a single-architecture environment is expanded with nodes of another architecture, for example when a cluster administrator adds arm nodes to an x86 cluster, the container images used by the original workloads do not support the arm architecture; if taints are added to the new arm nodes to ensure that already-deployed workloads continue to run normally without being scheduled to the arm nodes, the operation and maintenance difficulty is greatly increased, and improper operation may also cause service anomalies. In view of this, the embodiments of the present disclosure provide a workload scheduling method, apparatus, medium, and electronic device. The workload scheduling method provided by the embodiments of the present disclosure is first described in detail with reference to fig. 1 to 3.
Fig. 1 is a flow chart of a workload scheduling method provided by an embodiment of the present disclosure. The method may be performed by an electronic device including a slave scheduler and a master scheduler, and in particular by the slave scheduler in the electronic device. By way of example, the electronic device may be understood as a device such as a cluster, a mobile phone, a tablet, a laptop, a desktop computer, a smart TV, or a server. As shown in fig. 1, the method provided in this embodiment includes the following steps:
s110, in the process of creating the target workload, adding a scheduling gate for the target workload under the condition that the target workload is detected to meet the preset condition.
In the embodiment of the disclosure, the electronic device includes a slave scheduler and a master scheduler. For example, kube-scheduler is the master scheduler in a Kubernetes cluster, and a slave scheduler (hereinafter referred to as multi-scheduler) needs to be added to the Kubernetes cluster. The slave scheduler can intercept the creation request of the target workload, so that during the creation of the target workload, and under the condition that the target workload is detected to meet the preset condition, a scheduling gate is added for the target workload, so that the master scheduler pauses scheduling of the target workload when it monitors that a scheduling gate exists on the target workload. After the master scheduler monitors the created target workload, it can detect whether a scheduling gate exists on the target workload, and if so, it pauses scheduling the target workload.
In particular, the target workload may be any workload deployed by a user, wherein the workload is a running application.
Specifically, the scheduling gate is a flag indicating that scheduling of the workload is suspended, for example, a scheduling gate in a Kubernetes cluster.
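For ease of understanding, the following is a minimal, non-limiting sketch (in Python, operating on a Pod manifest represented as a plain dict) of adding and removing a scheduling gate; in a Kubernetes cluster this corresponds to the Pod `spec.schedulingGates` field, and the gate name used here is a hypothetical placeholder, not a name mandated by this disclosure.

```python
# Hypothetical gate name used by the slave scheduler (placeholder, an assumption).
GATE_NAME = "multiarch.example.com/arch-check"

def add_scheduling_gate(pod: dict, gate: str = GATE_NAME) -> dict:
    """Append a scheduling gate so the master scheduler pauses placement of the Pod."""
    gates = pod.setdefault("spec", {}).setdefault("schedulingGates", [])
    if not any(g.get("name") == gate for g in gates):
        gates.append({"name": gate})
    return pod

def remove_scheduling_gate(pod: dict, gate: str = GATE_NAME) -> dict:
    """Remove the gate; once no gates remain, the master scheduler may bind the Pod."""
    spec = pod.setdefault("spec", {})
    spec["schedulingGates"] = [
        g for g in spec.get("schedulingGates", []) if g.get("name") != gate
    ]
    return pod
```

In practice the slave scheduler would apply such a mutation through the cluster API (e.g., an admission webhook or a patch request) rather than on a local dict; the sketch only shows the shape of the field being manipulated.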
In some embodiments, the preset condition is empty, i.e., during the creation of the target workload, the scheduling gate is added to the target workload unconditionally. In this manner, a scheduling gate is added to every workload deployed by a user.
In other embodiments, the preset condition includes that the namespace of the target workload belongs to the target namespace and/or that the tag content of the preset tag of the target workload belongs to the target tag content.
Optionally, the method may further include: in the process of creating the target workload, under the condition that the target workload is detected not to meet the preset condition, a scheduling gate is not added for the target workload, so that the master scheduler schedules the target workload under the condition that the master scheduler monitors that the scheduling gate does not exist on the target workload.
Optionally, before S110, the method further includes: and responding to the configuration operation of the user on the preset conditions, and determining the preset conditions configured by the configuration operation.
Specifically, the specific content of the target namespace, the specific label type of the preset label, and the specific content of the target label content can be set by those skilled in the art according to the actual situation, and are not limited herein. For example, in a Kubernetes cluster, for certain Pods (e.g., agents on particular nodes) that should not be intercepted to add a scheduling gate, the slave scheduler may filter them out through a namespace selector and a label selector, and no scheduling gate is added to them.
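As an illustrative sketch of the preset-condition check described above (the field names, and combining the two sub-conditions with AND, are assumptions for illustration rather than requirements of this disclosure):

```python
def meets_preset_condition(pod, target_namespaces=None, target_labels=None):
    """Return True if the workload should receive a scheduling gate.

    target_namespaces: set of namespaces to gate, or None to gate any namespace.
    target_labels: dict mapping a preset label key to the set of accepted
                   label contents, or None for no label condition.
    """
    meta = pod.get("metadata", {})
    if target_namespaces is not None and meta.get("namespace") not in target_namespaces:
        return False
    if target_labels:
        labels = meta.get("labels", {})
        for key, allowed in target_labels.items():
            if labels.get(key) not in allowed:
                return False
    return True  # empty preset condition: every workload is gated
```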
It can be understood that by adding the scheduling gate to the target workload when the target workload is detected to meet the preset condition, the user can flexibly set the preset condition, so that the slave scheduler can flexibly add the scheduling gate to the target workload meeting the preset condition, and does not add the scheduling gate to the target workload not meeting the preset condition, so that the target workload is scheduled by the master scheduler as soon as possible.
S120, checking the architecture type supported by the container image, for each container image in the target workload, when the existence of the scheduling gate on the target workload is monitored.
In the embodiment of the disclosure, after the slave scheduler monitors the created target workload, it can detect whether a scheduling gate exists on the target workload, and in the case that a scheduling gate is detected on the target workload, check the architecture type supported by each container image in the target workload.
In particular, there are various specific implementations of "checking the architecture type supported by the container image"; a typical example is described below, but it should not be construed as limiting the embodiments of the present disclosure.
In some embodiments, checking the architecture type supported by the container image includes:
s121, acquiring a history checking result, wherein the history checking result comprises a history container image subjected to history checking and a framework type supported by the history container image.
In particular, a historical container image, i.e., a container image in which the history was checked for supported architecture types.
In particular, the history container image and the specific storage mode of the architecture type supported by the history container image may be set by a person skilled in the art according to practical situations, and are not limited herein, and may be stored by a key value pair, for example.
Specifically, the history checking result may be obtained by downloading, reading a memory, or the like, which is not limited herein.
S122, retrieving, in the history checking result, a historical container image which is the same as the container image.
In one example, S122 may include: comparing the container image with the historical container images in the history checking result one by one, until a historical container image identical to the container image is retrieved or every historical container image in the history checking result has been compared.
In another example, S122 may include:
retrieving, in the history checking result, historical container images whose image registry is the same as that of the container image, to obtain a first search result;
under the condition that the first search result is not empty, retrieving, in the first search result, historical container images whose image repository is the same as that of the container image, to obtain a second search result;
under the condition that the second search result is not empty, retrieving, in the second search result, the historical container image whose image tag is the same as that of the container image, to obtain a third search result;
under the condition that the third search result is not empty, acquiring the architecture types supported by the historical container image in the third search result, as the architecture types supported by the container image;
if the first search result, the second search result, or the third search result is empty, it is determined that no historical container image identical to the container image has been retrieved.
Specifically, a container image reference may be divided into three parts: an image registry, an image repository, and an image tag. For example, in a Kubernetes cluster, the image registry, image repository, and image tag are Registry, Repository, and Tag, respectively.
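For illustration, a container image reference can be split into the three parts as follows; the default registry `docker.io`, the default tag `latest`, and the heuristic for spotting a registry host are common conventions assumed here (digest references such as `@sha256:...` are not handled), not requirements of this disclosure:

```python
def parse_image(ref: str):
    """Split an image reference into (registry, repository, tag)."""
    registry = "docker.io"  # assumed default registry
    rest = ref
    first, _, remainder = ref.partition("/")
    # By convention, a leading registry component contains '.' or ':'
    # (e.g. registry.example.com:5000); otherwise it is part of the repository.
    if remainder and ("." in first or ":" in first):
        registry, rest = first, remainder
    repo, _, tag = rest.partition(":")
    return registry, repo, tag or "latest"  # assumed default tag
```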
When storing the history checking result, the image registry of the historical container image may be stored in the first-level cache, the image repository in the second-level cache, and the image tag in the third-level cache.
Specifically, a search result being non-empty means that it includes at least one historical container image, and being empty means that no historical container image exists in it; the second search result and the third search result are not described in detail herein.
Specifically, the image registry of the historical container images in the first search result is the same as that of the container image. The image registry and image repository of the historical container images in the second search result are the same as those of the container image. The image registry, image repository, and image tag of the historical container image in the third search result are all identical to those of the container image, i.e., the two images are the same.
It can be understood that searching for the container image through the three-level cache can reduce the workload of searching the history checking result and improve retrieval efficiency.
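The three-level cache lookup described above can be sketched with a nested dictionary standing in for the first-, second-, and third-level caches (an illustrative simplification, not the mandated storage layout):

```python
def lookup_arch(cache: dict, registry: str, repo: str, tag: str):
    """Return cached architecture types, or None if any cache level misses."""
    repos = cache.get(registry)   # first search result (registry level)
    if repos is None:
        return None
    tags = repos.get(repo)        # second search result (repository level)
    if tags is None:
        return None
    return tags.get(tag)          # third level: architecture types, or None

def store_arch(cache: dict, registry: str, repo: str, tag: str, archs):
    """Save a checked image and its architecture types into the cache."""
    cache.setdefault(registry, {}).setdefault(repo, {})[tag] = set(archs)
```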
S123, under the condition that a historical container image identical to the container image is retrieved, acquiring the architecture types supported by that historical container image as the architecture types supported by the container image.
Specifically, when a historical container image identical to the container image is retrieved in the history checking result, it indicates that the container image's supported architecture types have been checked before, so the container image and its supported architecture types can be found in the history checking result.
Optionally, checking the architecture type supported by the container image further comprises: S124, under the condition that no historical container image identical to the container image is retrieved, constructing a remote image repository access credential based on an access key of the container image; accessing the remote image repository based on the remote image repository access credential, and retrieving in the remote image repository the architecture types supported by the container image.
Specifically, the access key is information used to generate the remote image repository access credential. In a Kubernetes cluster, imagePullSecret is the access key.
Specifically, the remote image repository access credential is a credential for accessing the remote image repository.
In one example, retrieving the architecture types supported by the container image in the remote image repository includes: comparing the container image with the container images to be inspected in the remote image repository one by one, until a container image to be inspected identical to the container image is retrieved.
Specifically, each container image in the remote image repository is a container image to be inspected.
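As an illustrative sketch: when the remote image repository returns a Docker/OCI image index ("manifest list") document for an image, the supported architectures can be collected from its platform entries. Fetching and credential handling are omitted here, and the JSON shape follows the public manifest-list format, not anything specific to this disclosure:

```python
def architectures_from_index(index: dict):
    """Collect the architectures advertised by an image index document."""
    archs = set()
    for m in index.get("manifests", []):
        arch = m.get("platform", {}).get("architecture")
        # OCI indexes may carry non-runnable entries (e.g. attestations)
        # marked with architecture "unknown"; skip those.
        if arch and arch != "unknown":
            archs.add(arch)
    return archs
```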
Optionally, after the architecture types supported by the container image are retrieved from the remote image repository, the container image and the architecture types supported by the container image are saved in the history checking result.
Specifically, the container image is divided into the three parts of image registry, image repository, and image tag, so as to be stored in the three-level cache.
It will be appreciated that storing the container image and its supported architecture types in the history checking result enables the architecture types supported by the container image to be quickly retrieved when the same container image is included in another workload later.
Of course, instead of first searching the history checking result for a historical container image identical to the container image in the target workload, the container image to be inspected that is identical to the container image may be searched for directly in the remote image repository, which is not limited by the embodiments of the present disclosure.
And S130, adding a node affinity tag for the target workload based on the compatible architecture type under the condition that the compatible architecture type is determined based on the architecture type supported by each container image in the target workload, and removing a scheduling gate for the target workload, so that the master scheduler schedules the target workload based on the node affinity tag under the condition that the master scheduler monitors that the scheduling gate does not exist on the target workload.
In the disclosed embodiments, in the event that a compatible architecture type is determined based on the architecture type supported by each container image in the target workload, the slave scheduler may add a node affinity tag for the target workload based on the compatible architecture type and remove the scheduling gate for the target workload. After the master scheduler monitors the target workload, the master scheduler can detect whether a scheduling gate exists in the target workload, and schedule the target workload based on the node affinity tag under the condition that the scheduling gate does not exist in the target workload.
Specifically, the compatible architecture type is the intersection of the architecture types supported by the respective container images in the target workload. For example, suppose the target workload includes three container images. If the architecture types supported by the first container image include x86, those supported by the second container image include x86 and arm, and those supported by the third container image include x86 and risc-v, then the compatible architecture type is x86. If the architecture types supported by the first container image include x86, those supported by the second container image include arm, and those supported by the third container image include x86, then there is no compatible architecture type.
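The determination of the compatible architecture type as an intersection can be sketched as follows (a minimal illustration):

```python
def compatible_archs(per_image_archs):
    """Intersect the architecture sets of all container images in a workload.

    per_image_archs: list of sets, one per container image.
    Returns the compatible architecture types; an empty set means none exists.
    """
    if not per_image_archs:
        return set()
    result = set(per_image_archs[0])
    for archs in per_image_archs[1:]:
        result &= archs
    return result
```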
In particular, the node affinity tag is used to indicate which nodes the target workload can be scheduled to, and includes information indicating that the target workload can be scheduled to nodes whose architecture type is the compatible architecture type.
Specifically, after the scheduling gate is removed from the target workload, the master scheduler may select a target node from the plurality of nodes indicated by the node affinity tag based on a preset scheduling policy, and schedule the target workload to the target node. It should be noted that the preset scheduling policy may be determined by a person skilled in the art according to the actual situation, which is not limited herein.
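As an illustrative sketch, in a Kubernetes cluster the node affinity tag could be expressed as a `nodeAffinity` term keyed on the well-known `kubernetes.io/arch` node label; whether this exact label and affinity form are used is an assumption for illustration, not mandated by this disclosure:

```python
def node_affinity_for(archs):
    """Build a Pod affinity fragment restricting scheduling to compatible nodes."""
    return {
        "nodeAffinity": {
            "requiredDuringSchedulingIgnoredDuringExecution": {
                "nodeSelectorTerms": [{
                    "matchExpressions": [{
                        # Well-known Kubernetes node label for CPU architecture.
                        "key": "kubernetes.io/arch",
                        "operator": "In",
                        "values": sorted(archs),
                    }]
                }]
            }
        }
    }
```

The resulting dict would be patched into the workload's `spec.affinity` at the same time the scheduling gate is removed, so the master scheduler only considers architecture-compatible nodes.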
Optionally, the method further comprises: removing the scheduling gate from the target workload when the architecture type supported by at least one container image cannot be checked, or when no compatible architecture type can be determined for the target workload, so that the master scheduler schedules the target workload once it observes that no scheduling gate exists on the target workload.
It will be appreciated that a check may fail, for example because of a network error, so that the architecture types supported by a container image cannot be determined; when the architecture types supported by at least one container image in the target workload cannot be checked, no compatible architecture type can be determined. Likewise, when the intersection of the architecture types supported by the container images in the target workload is empty, no compatible architecture type exists. In these cases, to avoid leaving the target workload unschedulable indefinitely, the scheduling gate can be removed from the target workload directly, so that the master scheduler can schedule it promptly once it observes that no scheduling gate is present.
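The fallback behavior above can be sketched as a small reconcile step; this is an illustrative outline only, with the check, affinity, and gate operations passed in as callables (their names are assumptions, not API from the source):

```python
def reconcile(pod_images, check_archs, add_affinity, remove_gate):
    # Determine the compatible architecture set; on any check failure
    # (e.g. the remote registry is unreachable) or an empty intersection,
    # skip the affinity but still remove the gate, so the Pod is never
    # left pending indefinitely.
    compatible = None
    try:
        arch_sets = [check_archs(image) for image in pod_images]
        if arch_sets:
            compatible = set.intersection(*arch_sets)
    except OSError:
        compatible = None
    if compatible:  # non-empty intersection and every check succeeded
        add_affinity(compatible)
    remove_gate()   # removed unconditionally, so scheduling always resumes
    return compatible
```

Note that the gate removal is deliberately unconditional: the affinity is only a refinement, while a Pod with a lingering gate would never be scheduled at all.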
According to the embodiments of the present disclosure, a scheduling gate is first added to the target workload so that the master scheduler does not schedule it immediately; then, once the compatible architecture type of the target workload is determined, a node affinity tag is added based on that type, so that the master scheduler schedules the target workload based on the node affinity tag once no scheduling gate exists on it. This prevents the target workload from being scheduled onto a node of an unsupported architecture and thereby solves the problem of the target workload failing to run normally.
The workload scheduling method provided by the embodiments of the present disclosure is described in detail below with reference to a specific example. When the method is applied to a Kubernetes cluster, kube-scheduler (i.e. the master scheduler) is not modified; instead, a multiarch-scheduler (i.e. the slave scheduler) is added. Using the Pod Scheduling Readiness feature of Kubernetes, a scheduling gate can be added when a Pod (i.e. the minimum workload unit) is created, so as to suspend scheduling; the Pod Mutable Scheduling Directives feature is then used to add node affinity tags matching the container image architectures to a Pod whose scheduling is suspended. Fig. 2 is a workflow diagram of an electronic device in the scheduling process of a workload provided by an embodiment of the present disclosure. Fig. 3 is a workflow diagram of the slave scheduler in the scheduling of a workload provided by an embodiment of the present disclosure. Referring to fig. 2 and 3, the scheduling process mainly includes the following steps:
1. The target workload is deployed.
2. Intercepting a creation request of a target workload, and adding a scheduling gate for the target workload.
Specifically, when a user deploys a target workload, the Pod being created (i.e. the target workload) is intercepted by the multiarch-scheduler (i.e. the slave scheduler) and a scheduling gate is added before the Pod is created; the Pod is then admitted and persisted to etcd (a cloud-native distributed storage database). Some Pods (for example, agents bound to a specific node) need not be intercepted; the multiarch-scheduler supports filtering them out via a namespace and a tag selector, as shown in the automated pause-scheduling section of Fig. 3.
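The intercept-and-gate step can be sketched as follows; the gate name and the dict-based Pod shape are illustrative assumptions (Kubernetes scheduling gates do live under `spec.schedulingGates` as `{"name": ...}` entries, but the source does not fix a gate name):

```python
SCHEDULING_GATE = "multiarch.example.com/arch-check"  # hypothetical gate name

def should_intercept(pod, namespaces, label_selector):
    # Pods outside the configured namespaces, or whose labels do not match
    # the selector (e.g. node-specific agents), are admitted untouched.
    if pod["metadata"].get("namespace") not in namespaces:
        return False
    labels = pod["metadata"].get("labels", {})
    return all(labels.get(k) == v for k, v in label_selector.items())

def add_scheduling_gate(pod):
    # Append the gate so kube-scheduler leaves the Pod unscheduled until
    # the gate is removed (idempotent: the gate is added at most once).
    gates = pod["spec"].setdefault("schedulingGates", [])
    if {"name": SCHEDULING_GATE} not in gates:
        gates.append({"name": SCHEDULING_GATE})
    return pod
```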
3. And monitoring the target workload, and suspending the target workload when a scheduling door exists on the target workload.
4. The target workload is marked as non-schedulable.
Specifically, kube-scheduler (i.e. the master scheduler) monitors the Pod; if a scheduling gate exists, kube-scheduler will not schedule the Pod, i.e. the Pod is only an ordinary resource at this point and is marked as non-schedulable so that the user is aware of its state. After the scheduling gate is subsequently removed, kube-scheduler resumes scheduling the Pod.
5. The target workload is monitored, and in the event that a dispatch gate exists on the target workload, the architecture type supported by each container image in the target workload is checked.
Specifically, the multiarch-scheduler monitors the created Pod; if a scheduling gate exists on the Pod, it checks the architecture types supported by each container image in the Pod; otherwise, the Pod will be scheduled directly by kube-scheduler. As shown in Fig. 3, when checking the architecture types supported by a container image, the three-level cache of historical check results is first searched for a previous check result for that image (i.e. a historical container image). If none is found, a remote image repository access credential is constructed using the access key of the Pod, the remote image repository storing the container image is accessed with that credential, all architecture types supported by the container image are checked there, and the three-level cache is then updated with the container image and the architectures it supports.
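A minimal sketch of the three-level cache, keyed registry → repository → tag as described later in this document; the reference-splitting helper covers only the common `registry/repository:tag` form and is an illustrative simplification:

```python
class ArchCache:
    # Three-level cache keyed registry -> repository -> tag, so images that
    # share a registry or repository also share key prefixes, saving memory.
    def __init__(self):
        self._levels = {}

    def get(self, registry, repository, tag):
        # Each lookup stops at the first level with no matching entry.
        return self._levels.get(registry, {}).get(repository, {}).get(tag)

    def put(self, registry, repository, tag, archs):
        repos = self._levels.setdefault(registry, {})
        repos.setdefault(repository, {})[tag] = set(archs)

def split_image_ref(ref):
    # Split "registry/repository:tag" into the three cache keys.
    # (Real image references have more forms; this covers the common case.)
    registry, _, rest = ref.partition("/")
    if ":" in rest:
        repository, _, tag = rest.rpartition(":")
    else:
        repository, tag = rest, "latest"
    return registry, repository, tag
```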
6. A compatible architecture type is determined based on the architecture types supported by each container image in the target workload, and a node affinity tag is added based on the compatible architecture type.
Specifically, the multiarch-scheduler takes the intersection of the architecture types supported by the container images in the Pod as the compatible architecture types. It then declares the compatible architecture types the Pod can run on by adding kubernetes.io/arch tags (the node architecture label native to Kubernetes). If node affinities are already declared on the Pod, then because different node affinity terms are in a logical OR relationship, the compatible architecture tag must be added to every node affinity term declared by the Pod.
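The OR semantics of `nodeSelectorTerms` explain why the architecture requirement must be appended to each term; a sketch, assuming the terms are plain dicts in the Kubernetes shape:

```python
def add_arch_to_affinity(node_selector_terms, compatible_archs):
    # nodeSelectorTerms are ORed together, so the architecture requirement
    # must be appended to every term the Pod already declares; otherwise a
    # term without the constraint could still match an unsupported node.
    expr = {"key": "kubernetes.io/arch", "operator": "In",
            "values": sorted(compatible_archs)}
    for term in node_selector_terms:
        # Append a copy so later mutation of one term cannot affect another.
        term.setdefault("matchExpressions", []).append(dict(expr))
    return node_selector_terms
```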
If the multiarch-scheduler check fails (e.g. because of a network error) and the architecture types supported by a container image cannot be checked, the scheduling gate on the Pod is removed and the Kubernetes scheduler continues to schedule it.
7. The dispatch gate on the target workload is removed.
Specifically, the multiarch-scheduler removes the scheduling gate from the Pod.
8. And monitoring the target workload, and scheduling based on the node affinity tag under the condition that no scheduling gate exists on the target workload.
Specifically, kube-scheduler schedules Pod to the appropriate node according to node affinity.
9. The scheduling result of the target workload is recorded.
The workload scheduling method provided by the embodiments of the present disclosure can realize advanced automatic pause scheduling: the multiarch-scheduler monitors Pod creation events in real time, implements user-customizable advanced filtering through namespaces, tag selectors, and the like, and pauses scheduling automatically by adding a scheduling gate. The method can also intelligently sense the architectures a workload supports: the multiarch-scheduler monitors Pod update events in real time (i.e. the addition of a scheduling gate); for a Pod whose scheduling is suspended, it searches the remote image repository and, through a high-performance cache, matches the architectures of its container images, merges the results, and adds the appropriate node affinity, thereby sensing the architectures supported by a workload in a multi-architecture cloud-native environment. The method can also cache container image architecture results with high performance: the multiarch-scheduler splits a container image reference into Registry, Repository, and Tag to build a three-level cache, which greatly saves memory and avoids invalid remote calls. The method has the following advantages: by making use of key Kubernetes features, it is non-invasive to the native Kubernetes components and can be operated and maintained automatically and independently; smooth migration from a single architecture to multiple architectures can be achieved without additional operational cost; the difficulty of scaling out multi-architecture nodes is reduced, Pods are intercepted automatically, and workloads are prevented from being scheduled onto nodes of unsupported architectures.
Users can deploy workloads in a multi-architecture environment without having to maintain the compatibility between workloads and nodes when considering container image upgrades. The development difficulty for developers is also reduced, since multi-architecture support does not need to be considered separately.
Fig. 4 is a schematic structural diagram of a workload scheduling apparatus according to an embodiment of the present disclosure, where the workload scheduling apparatus may be understood as the electronic device or a part of functional modules in the electronic device. As shown in fig. 4, the workload scheduling apparatus 400 includes:
a first adding module 410, configured to add a scheduling gate to a target workload when, during creation of the target workload, it is detected that the target workload meets a preset condition;
a first checking module 420, configured to check, for each container image in the target workload, the architecture type supported by the container image when it is detected that a scheduling gate exists on the target workload;
a second adding module 430, configured to add a node affinity tag to the target workload based on the compatible architecture type, and remove a scheduling gate for the target workload, in a case where a compatible architecture type is determined based on the architecture type supported by each container image in the target workload, so that the master scheduler schedules the target workload based on the node affinity tag if it is detected that there is no scheduling gate on the target workload.
In another embodiment of the present disclosure, the preset condition includes that the namespace of the target workload belongs to a target namespace and/or that the tag content of the preset tag of the target workload belongs to a target tag content.
In yet another embodiment of the present disclosure, the first inspection module 420 includes a first inspection sub-module that inspects the architecture type supported by the container image, the first inspection sub-module including:
the first acquisition unit is used for acquiring a history checking result, wherein the history checking result comprises a history container image subjected to history checking and a framework type supported by the history container image;
a first search unit configured to search the history inspection result for a history container image identical to the container image;
and the second acquiring unit, configured to, when a historical container image identical to the container image is retrieved, acquire the architecture type supported by that historical container image as the architecture type supported by the container image.
In still another embodiment of the present disclosure, the first retrieving unit is specifically configured to retrieve, from the history checking result, a historical container image having the same image registry as the container image, to obtain a first retrieval result;
when the first retrieval result is not empty, retrieve, from the first retrieval result, a historical container image having the same image repository as the container image, to obtain a second retrieval result;
when the second retrieval result is not empty, retrieve, from the second retrieval result, a historical container image having the same image tag as the container image, to obtain a third retrieval result;
when the third retrieval result is not empty, acquire the architecture type supported by the historical container image in the third retrieval result as the architecture type supported by the container image;
and when the first retrieval result, the second retrieval result, or the third retrieval result is empty, determine that no historical container image identical to the container image has been retrieved.
In yet another embodiment of the present disclosure, the first inspection sub-module further includes:
a first construction unit, configured to construct a remote image repository access credential based on an access key of the container image when no historical container image identical to the container image is retrieved;
and a second retrieval unit, configured to access the remote image repository based on the remote image repository access credential and retrieve, in the remote image repository, the architecture types supported by the container image.
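When a registry is queried for a multi-arch image, it can return an OCI image index (Docker "manifest list") whose entries each carry a platform; extracting the architecture set from such a document might look like the sketch below. The JSON shape follows the OCI image index specification; the "unknown" filter skips non-runnable entries (e.g. attestation manifests) and is a pragmatic assumption, not something the source describes:

```python
def archs_from_image_index(index):
    # Extract the supported architectures from an OCI image index /
    # Docker manifest list, as returned by a registry's
    # GET /v2/<repository>/manifests/<tag> endpoint when the multi-arch
    # media type is requested via the Accept header.
    return {entry["platform"]["architecture"]
            for entry in index.get("manifests", [])
            if "platform" in entry
            and entry["platform"].get("architecture") != "unknown"}
```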
In yet another embodiment of the present disclosure, the apparatus further comprises:
a saving module, configured to save the container image and the architecture types supported by the container image into the history checking result after the architecture types supported by the container image are retrieved from the remote image repository.
In yet another embodiment of the present disclosure, the apparatus further comprises:
and a removing module, configured to remove the scheduling gate from the target workload when the architecture type supported by at least one container image cannot be checked, or when no compatible architecture type can be determined for the target workload, so that the master scheduler schedules the target workload once it observes that no scheduling gate exists on the target workload.
The device provided in this embodiment can execute the method of any one of the above embodiments, and the execution mode and the beneficial effects thereof are similar, and are not described herein again.
The embodiment of the disclosure also provides an electronic device, which comprises: a memory in which a computer program is stored; a processor for executing the computer program, which when executed by the processor can implement the method of any of the above embodiments.
By way of example, fig. 5 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure. Referring now in particular to fig. 5, a schematic diagram of an electronic device 500 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 500 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 5 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 5, the electronic device 500 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 shows an electronic device 500 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or from the storage means 508, or from the ROM 502. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 501.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
in the process of creating the target workload, adding a scheduling gate for the target workload when it is detected that the target workload meets a preset condition;
checking, for each container image in the target workload, the architecture type supported by the container image when a scheduling gate is detected on the target workload;
Under the condition that the compatible architecture type is determined based on the architecture type supported by each container image in the target workload, adding a node affinity tag for the target workload based on the compatible architecture type, and removing a scheduling gate for the target workload, so that the master scheduler schedules the target workload based on the node affinity tag under the condition that the master scheduler monitors that the scheduling gate does not exist on the target workload.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The embodiments of the present disclosure further provide a computer readable storage medium, where a computer program is stored, where the computer program, when executed by a processor, may implement a method according to any one of the foregoing embodiments, and the implementation manner and beneficial effects of the method are similar, and are not described herein again.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of scheduling a workload, comprising:
in the process of creating a target workload, adding a scheduling gate for the target workload under the condition that the target workload is detected to meet a preset condition;
checking, for each container image in the target workload, a type of architecture supported by the container image if a scheduling gate is detected to be present on the target workload;
and under the condition that the compatible architecture type is determined based on the architecture type supported by each container image in the target workload, adding a node affinity tag for the target workload based on the compatible architecture type, and removing a scheduling gate for the target workload, so that a master scheduler schedules the target workload based on the node affinity tag under the condition that the master scheduler monitors that the scheduling gate does not exist on the target workload.
2. The method of claim 1, wherein the preset conditions include: the namespaces of the target workload belong to target namespaces and/or the label content of the preset labels of the target workload belong to target label content.
3. The method of claim 1, wherein the checking the architecture type supported by the container image comprises:
obtaining a history checking result, wherein the history checking result comprises a history container image subjected to history checking and a framework type supported by the history container image;
retrieving a historical container image that is the same as the container image in the historical inspection results;
and under the condition that the historical container image which is the same as the container image is retrieved, acquiring the architecture type supported by the historical container image which is the same as the container image, and taking the architecture type as the architecture type supported by the container image.
4. The method according to claim 3, wherein said retrieving, from the history checking result, a historical container image identical to the container image comprises:
retrieving, from the history checking result, a historical container image having the same image registry as the container image, to obtain a first retrieval result;
when the first retrieval result is not empty, retrieving, from the first retrieval result, a historical container image having the same image repository as the container image, to obtain a second retrieval result;
when the second retrieval result is not empty, retrieving, from the second retrieval result, a historical container image having the same image tag as the container image, to obtain a third retrieval result;
when the third retrieval result is not empty, acquiring the architecture type supported by the historical container image in the third retrieval result as the architecture type supported by the container image;
when the first retrieval result, the second retrieval result, or the third retrieval result is empty, determining that no historical container image identical to the container image has been retrieved.
5. The method of claim 3, wherein the checking the architecture type supported by the container image further comprises:
constructing a remote image repository access credential based on an access key of the container image if the same historical container image as the container image is not retrieved;
accessing the remote image repository based on the remote image repository access credential, and retrieving, in the remote image repository, the architecture type supported by the container image.
6. The method of claim 5, further comprising, after said retrieving in said remote image repository the architecture types supported by said container image:
the container image and its supported architecture types are saved in the history check result.
7. The method according to any one of claims 1-6, wherein the scheduling gate is removed from the target workload when the architecture type supported by at least one of the container images cannot be checked, or when no compatible architecture type can be determined for the target workload, such that the master scheduler schedules the target workload when no scheduling gate is detected on the target workload.
8. A workload scheduler, comprising:
the first adding module is used for adding a scheduling gate for the target workload when the target workload is detected to meet the preset condition in the creating process of the target workload;
a first checking module, configured to check, for each container image in the target workload, a type of architecture supported by the container image if it is detected that a dispatch gate exists on the target workload;
And the second adding module is used for adding a node affinity tag for the target workload based on the compatible architecture type under the condition that the compatible architecture type is determined based on the architecture type supported by each container image in the target workload, and removing a scheduling gate for the target workload, so that a master scheduler schedules the target workload based on the node affinity tag under the condition that the master scheduler monitors that the scheduling gate does not exist on the target workload.
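The second adding module of claim 8 (together with the fallback of claim 7) can be sketched against the Kubernetes Pod spec: intersect the architectures supported by every container image, express a non-empty intersection as required node affinity on the standard `kubernetes.io/arch` node label, and clear the workload's `schedulingGates`. The nested field names follow the Kubernetes API; the plain-dict workload is a simplified stand-in, not the patent's actual implementation.

```python
# Illustrative sketch of claims 7-8: compute the compatible architecture set,
# emit a node-affinity requirement on kubernetes.io/arch, and remove the
# scheduling gate so the master scheduler may proceed.

def add_affinity_and_ungate(workload, archs_per_image):
    """Return the sorted compatible architectures, or None if there are none."""
    compatible = set.intersection(*map(set, archs_per_image)) if archs_per_image else set()
    if not compatible:
        # Claim 7: even on failure, remove the gate so scheduling proceeds.
        workload["spec"]["schedulingGates"] = []
        return None
    workload["spec"]["affinity"] = {
        "nodeAffinity": {
            "requiredDuringSchedulingIgnoredDuringExecution": {
                "nodeSelectorTerms": [{
                    "matchExpressions": [{
                        "key": "kubernetes.io/arch",
                        "operator": "In",
                        "values": sorted(compatible),
                    }]
                }]
            }
        }
    }
    workload["spec"]["schedulingGates"] = []  # claim 8: remove the gate
    return sorted(compatible)
```

Because the default scheduler ignores a Pod while any scheduling gate is present, clearing the gate only after the affinity label is written guarantees the workload is never placed on a node of an unsupported architecture.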
9. An electronic device, comprising:
a processor and a memory, wherein the memory stores a computer program which, when executed by the processor, performs the method of any of claims 1-7.
10. A computer-readable storage medium, wherein the storage medium stores a computer program which, when executed by a processor, implements the method of any of claims 1-7.
CN202311650829.1A 2023-12-05 2023-12-05 Workload scheduling method, device, equipment and storage medium Active CN117349035B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311650829.1A CN117349035B (en) 2023-12-05 2023-12-05 Workload scheduling method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117349035A true CN117349035A (en) 2024-01-05
CN117349035B CN117349035B (en) 2024-03-15

Family

ID=89357961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311650829.1A Active CN117349035B (en) 2023-12-05 2023-12-05 Workload scheduling method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117349035B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118381822A (en) * 2024-06-27 2024-07-23 腾讯科技(深圳)有限公司 Service migration method, device, system, electronic equipment and storage medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110545326A (en) * 2019-09-10 2019-12-06 杭州数梦工场科技有限公司 Cluster load scheduling method and device, electronic equipment and storage medium
CN110795250A (en) * 2019-10-30 2020-02-14 腾讯科技(深圳)有限公司 Load scheduling method, device, equipment and storage medium
CN112004268A (en) * 2020-10-26 2020-11-27 新华三技术有限公司 Resource scheduling method and device
CN112667373A (en) * 2020-12-17 2021-04-16 北京紫光展锐通信技术有限公司 Task scheduling method, device and system based on heterogeneous CPU architecture and storage medium
US20210224106A1 (en) * 2020-01-22 2021-07-22 Salesforce.Com, Inc. Load balancing through autonomous organization migration
CN113645300A (en) * 2021-08-10 2021-11-12 上海道客网络科技有限公司 Node intelligent scheduling method and system based on Kubernetes cluster
CN113946415A (en) * 2018-03-16 2022-01-18 华为技术有限公司 Scheduling method and device and main node
US20220028185A1 (en) * 2020-07-27 2022-01-27 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for improving task offload scheduling in an edge-computing environment
CN114706658A (en) * 2022-03-29 2022-07-05 浪潮云信息技术股份公司 Container mirror image data processing method, device, equipment and medium
US20230037293A1 (en) * 2020-09-01 2023-02-09 Huawei Cloud Computing Technologies Co., Ltd. Systems and methods of hybrid centralized distributive scheduling on shared physical hosts
CN116233070A (en) * 2023-03-20 2023-06-06 北京奇艺世纪科技有限公司 Distribution system and distribution method for static IP addresses of clusters
CN116257363A (en) * 2023-05-12 2023-06-13 中国科学技术大学先进技术研究院 Resource scheduling method, device, equipment and storage medium
CN116541134A (en) * 2023-07-05 2023-08-04 苏州浪潮智能科技有限公司 Method and device for deploying containers in multi-architecture cluster

Similar Documents

Publication Publication Date Title
US20230161647A1 (en) Extending the kubernetes api in-process
US20130283259A1 (en) Application installation
CN112965761B (en) Data processing method, system, electronic equipment and storage medium
CN107291481B (en) Component updating method, device and system
CN111597065B (en) Method and device for collecting equipment information
CN113835992B (en) Memory leakage processing method and device, electronic equipment and computer storage medium
CN113553178A (en) Task processing method and device and electronic equipment
CN117349035B (en) Workload scheduling method, device, equipment and storage medium
CN112612489B (en) Method and device for constructing upgrade package of software and electronic equipment
CN113886353B (en) Data configuration recommendation method and device for hierarchical storage management software and storage medium
CN112181724B (en) Big data disaster recovery method and device and electronic equipment
CN114328097A (en) File monitoring method and device, electronic equipment and storage medium
CN110888773B (en) Method, device, medium and electronic equipment for acquiring thread identification
US20230418681A1 (en) Intelligent layer derived deployment of containers
CN116679930A (en) Front-end project construction method and device, electronic equipment and storage medium
CN117369952B (en) Cluster processing method, device, equipment and storage medium
CN111309367B (en) Method, device, medium and electronic equipment for managing service discovery
CN115373757A (en) Solving method and device for cluster monitoring data loss in Promethues fragmentation mode
CN110489341B (en) Test method and device, storage medium and electronic equipment
CN116263824A (en) Resource access method and device, storage medium and electronic equipment
CN117389690B (en) Mirror image package construction method, device, equipment and storage medium
CN112052128B (en) Disaster recovery method and device and electronic equipment
CN112749042B (en) Application running method and device
CN113377489A (en) Construction and operation method and device of remote sensing intelligent monitoring application based on cloud platform
CN114398233B (en) Load abnormality detection method and device, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant