WO2023198061A1 - A container scheduling method, electronic device and storage medium (一种容器调度方法、电子设备和存储介质) - Google Patents


Info

Publication number
WO2023198061A1
WO2023198061A1 (application PCT/CN2023/087625)
Authority
WO
WIPO (PCT)
Prior art keywords: task, container, resource, node, scheduling
Prior art date
Application number
PCT/CN2023/087625
Other languages
English (en)
French (fr)
Inventor
屠要峰
刘子捷
牛家浩
高洪
张登银
王德政
Original Assignee
中兴通讯股份有限公司
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2023198061A1 publication Critical patent/WO2023198061A1/zh

Classifications

    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/5038 Allocation of resources considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F9/5061 Partitioning or combining of resources
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • This application relates to the field of computer application technology, for example, to a container scheduling method, an electronic device, and a storage medium.
  • Container deployment solutions in the related art create a corresponding container for each task in a big data or artificial intelligence job. Since such jobs often contain multiple tasks, one big data or artificial intelligence job often corresponds to multiple containers. However, the batch scheduling efficiency of business jobs in the related art is still low.
  • The embodiments of the present application provide a container scheduling method, an electronic device, and a storage medium, which can improve container scheduling efficiency and reduce the waiting time of business jobs while realizing batch scheduling of business jobs.
  • An embodiment of the present application provides a container scheduling method, where the method includes the steps described below.
  • An embodiment of the present application also provides an electronic device, where the electronic device includes:
  • one or more processors;
  • a memory configured to store one or more programs;
  • when the one or more programs are executed by the one or more processors, the one or more processors implement any of the methods described in the embodiments of this application.
  • An embodiment of the present application also provides a computer-readable storage medium storing one or more programs, where the one or more programs are executed by one or more processors to implement the method described in any one of the embodiments of this application.
  • The fitness and carrying capacity of each cluster node for the task containers in each custom description resource object are determined.
  • Configuring the scheduling relationship between task containers and cluster nodes on this basis can improve container scheduling efficiency while realizing batch scheduling of task containers.
  • Allocating task containers to different cluster nodes according to fitness and carrying capacity can improve the matching degree between task containers and cluster nodes and alleviate resource competition among cluster nodes.
  • Figure 1 is a schematic diagram of the working mode of a container pipeline scheduling method
  • Figure 2 is a schematic diagram of the working mode of a container batch scheduling method
  • Figure 3 is a schematic diagram of the composition of a big data or artificial intelligence job
  • Figure 4 is a schematic diagram of the resource competition problem in container batch scheduling
  • Figure 5 is a flow chart of a container scheduling method provided by an embodiment of the present application.
  • Figure 6 is a flow chart of a container scheduling method provided by an embodiment of the present application.
  • Figure 7 is a flow chart of another container scheduling method provided by an embodiment of the present application.
  • Figure 8 is a flow chart of another container scheduling method provided by an embodiment of the present application.
  • Figure 9 is a schematic structural diagram of a container scheduling device provided by an embodiment of the present application.
  • Figure 10 is a schematic scenario diagram of a container scheduling method provided by an embodiment of the present application.
  • Figure 11 is an example diagram of a container scheduling device provided by an embodiment of the present application.
  • Figure 12 is a flow example diagram of a container scheduling method provided by an embodiment of the present application.
  • Figure 13 is an example flow chart of job division provided by the embodiment of the present application.
  • Figure 14 is a flow example diagram of a job type verification provided by the embodiment of the present application.
  • Figure 15 is an example flowchart of scheduling job sorting provided by the embodiment of the present application.
  • Figure 16 is a flow example diagram of node filtering and fitness calculation provided by the embodiment of the present application.
  • Figure 17 is an example flow chart of node carrying capacity calculation provided by the embodiment of the present application.
  • Figure 18 is an example flow chart of scheduling node selection provided by the embodiment of the present application.
  • Figure 19 is an example diagram of a node binding process provided by an embodiment of the present application.
  • Figure 20 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Pipeline container scheduling suffers from low scheduling efficiency when processing big data and artificial intelligence jobs.
  • To address this, researchers have proposed the batch scheduling method shown in Figure 2.
  • The batch scheduling method takes a big data or artificial intelligence job containing different types of tasks as the scheduling unit in each scheduling cycle.
  • However, big data and artificial intelligence jobs often include different tasks.
  • The batch scheduling method in the related art must traverse the resource requirements of all tasks in a big data or artificial intelligence job when making scheduling decisions, so batch scheduling of business jobs in the related art remains inefficient.
  • In addition, the batch scheduling method represented by Volcano schedules all tasks contained in a big data or artificial intelligence job onto the same node as far as possible. Since these tasks may require the same type of resources, this batch scheduling method easily leads to node resource competition, as shown in Figure 4. The batch scheduling methods in the related art have therefore improved the scheduling efficiency of big data and artificial intelligence jobs only to a certain extent; the problems of low scheduling efficiency and node resource competition remain.
  • Embodiments of the present application provide a container scheduling method to improve the scheduling efficiency of big data and artificial intelligence jobs and alleviate the problem of node resource competition.
  • Figure 5 is a flow chart of a container scheduling method provided by an embodiment of the present application.
  • the embodiment of the present application can be applied to the situation of big data or artificial intelligence job scheduling.
  • The method can be executed by a container scheduling device, which can be implemented in software and/or hardware; see Figure 5.
  • the method provided by the embodiment of the present application specifically includes the following steps:
  • Step 110 Divide the task containers corresponding to the business job into at least one custom description resource object according to task type.
  • The task type can be the business type corresponding to a specific big data or artificial intelligence business, and can represent the functions the job needs to implement, the data it needs to transmit, and so on.
  • The business job can be the transaction to be executed by big data or artificial intelligence, and a business job can correspond to one or more task containers; a task container can be an environment for processing the business.
  • A resource object can include one or more task containers.
  • The custom description resource object can be a resource object customized by the user as needed; the task containers managed in a custom description resource object can correspond to the same task type.
  • During implementation, the corresponding task type is determined for each business job, and the business job can be divided into different custom description resource objects according to task type.
  • A correspondence between business types and custom description resource objects can be configured, and business jobs can be divided according to this correspondence.
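  • As an illustrative sketch only (not part of the application's text), the division of Step 110 can be modeled in Python; the task_type key and the dictionary shapes are assumptions for the example:

```python
from collections import defaultdict

def divide_into_resource_objects(task_containers):
    """Group task containers by their task-type label; each group
    becomes one custom description resource object."""
    groups = defaultdict(list)
    for container in task_containers:
        groups[container["task_type"]].append(container)
    return [{"task_type": t, "containers": c} for t, c in groups.items()]

jobs = [
    {"name": "map-0", "task_type": "map"},
    {"name": "map-1", "task_type": "map"},
    {"name": "reduce-0", "task_type": "reduce"},
]
objects = divide_into_resource_objects(jobs)
```

Here two resource objects result, one managing the "map" task containers and one managing the "reduce" task container, so containers of the same task type are scheduled together.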
  • Step 120 Determine the fitness and carrying capacity of each cluster node for the task containers in the custom description resource objects.
  • the cluster node can be a processing node that processes business jobs
  • the number of cluster nodes can be one or more
  • different cluster nodes can be located at the same location or different locations.
  • Fitness can be the matching degree between a cluster node and the business jobs in different custom description resource objects; the matching degree can be determined by factors such as resources and processing performance.
  • Carrying capacity can be the ability of a cluster node to accommodate business jobs; it can be determined from the resource consumption of the custom description resource object and the remaining resources of the cluster node.
  • The fitness of each cluster node for the task containers in different custom description resource objects can be determined, as well as the carrying capacity of each cluster node to accommodate those task containers.
  • Step 130 Configure the scheduling relationship between cluster nodes and task containers according to the fitness and carrying capacity.
  • The scheduling relationship can be a configuration relationship between a task container and a cluster node; a cluster node with a scheduling relationship can process the corresponding task container.
  • For each task container, the cluster node that best matches it can be determined according to the corresponding fitness and carrying capacity, and a scheduling relationship between the task container and that cluster node can be established.
  • In this way, the fitness and carrying capacity of each cluster node for the task containers in each custom description resource object are determined.
  • Configuring the scheduling relationship between task containers and cluster nodes on this basis can improve container scheduling efficiency while realizing batch scheduling of task containers.
  • Allocating task containers to different cluster nodes according to fitness and carrying capacity can improve the matching degree between task containers and cluster nodes and alleviate resource competition among cluster nodes.
  • Figure 6 is a flow chart of a container scheduling method provided by an embodiment of the present application.
  • The embodiment of the present application is a concrete implementation based on the above embodiment.
  • the method provided by the embodiment of the present application specifically includes the following steps:
  • Step 210 Create the task containers corresponding to the tasks of the business job, where each task container includes an image name, a container startup command, container startup parameters, and a task type label.
  • A business job may include multiple tasks, and a task may be a pending transaction of a cluster node.
  • For a business job, a corresponding task container may be created for each task it contains, where each task container may have an image name, a container startup command, container startup parameters, and a task type label.
  • For example, the container creation module of the Kubernetes API-Server creates a corresponding Kubernetes Pod for each task in the job.
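  • As a hedged illustration of what such a per-task Pod might look like, the sketch below builds a minimal Pod description carrying the four elements named in Step 210; the image name, command, arguments, and label key are invented for the example and are not taken from the application:

```python
def make_task_pod(task_name, image, command, args, task_type):
    """Build a minimal Pod-style description for one task:
    image name, startup command, startup parameters, task-type label."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": task_name, "labels": {"task-type": task_type}},
        "spec": {"containers": [{"name": task_name, "image": image,
                                 "command": command, "args": args}]},
    }

# Hypothetical values for illustration only.
pod = make_task_pod("map-0", "spark:3.5", ["spark-submit"], ["--mode", "map"], "map")
```

The task-type label is what Step 230 later uses to route the Pod into the custom description resource object of the matching type.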
  • Step 220 Create a custom description resource object corresponding to the business job, where the custom description resource object includes a name, a job type label, and a scheduling priority label.
  • Step 230 Divide each task container into the custom description resource object with the matching job type label, according to the task type label of the task container.
  • The task containers can be divided into different custom description resource objects according to their configured task type labels.
  • The job type label of a custom description resource object can match the task type labels of its task containers.
  • Matching can mean that the task types are the same or related.
  • Step 240 Filter the custom description resource objects according to resource object type.
  • The type of each custom description resource object can be extracted; if the extracted type is the same as the set resource object type, the custom description resource object is processed further, otherwise it is not processed.
  • Step 250 Sort the custom description resource objects according to the task queue.
  • The task queue can be a storage space for temporarily storing task containers, and the custom description resource objects in the task queue can follow a first-in, first-out output policy.
  • The custom description resource objects can be placed into the task queue and sorted there to facilitate subsequent task container scheduling.
  • The number of task queues can be one or more.
  • Custom description resource objects of the same resource object type can be stored in the same task queue, and custom description resource objects of different types can be stored in different task queues.
  • Step 260 Determine the fitness and carrying capacity of each cluster node for the task containers in the custom description resource objects.
  • Step 270 Configure the scheduling relationship between the cluster nodes and the task containers according to the fitness and carrying capacity.
  • Sorting the custom description resource objects according to the task queue includes at least one of the following: sorting the custom description resource objects in the task queue according to the order in which they joined the queue; sorting the custom description resource objects in the task queue according to scheduling priority.
  • The order of joining the queue can be the time order in which the custom description resource objects arrive in the task queue, and the scheduling priority can be the order in which the custom description resource objects are scheduled.
  • The custom description resource objects can be sorted in the task queue according to the order in which they arrive, with the objects that arrive first being output first.
  • Custom description resource objects can also be sorted in the task queue according to their configured scheduling priority, with the task containers of higher-priority objects output first.
  • Custom description resource objects can also be sorted based on both the enqueue order and the scheduling priority: for example, the objects are arranged from high to low scheduling priority in the task queue, and multiple objects with the same scheduling priority are sorted by the order in which they were enqueued.
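  • The combined ordering just described (priority first, enqueue order as the tiebreaker) can be sketched in a few lines of Python; the priority and enqueue_seq fields are assumed names for this illustration:

```python
def sort_queue(resource_objects):
    """Sort custom description resource objects: higher scheduling
    priority first; ties broken by earlier enqueue order."""
    return sorted(resource_objects, key=lambda o: (-o["priority"], o["enqueue_seq"]))

queue = [
    {"name": "a", "priority": 1, "enqueue_seq": 0},
    {"name": "b", "priority": 3, "enqueue_seq": 2},
    {"name": "c", "priority": 3, "enqueue_seq": 1},
]
ordered = [o["name"] for o in sort_queue(queue)]
```

Objects "b" and "c" share the highest priority, so "c" (enqueued earlier) is output first, followed by "b", then the low-priority "a".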
  • Figure 7 is a flow chart of another container scheduling method provided by the embodiment of the present application.
  • The embodiment of the present application is a concrete implementation based on the above embodiment. Referring to Figure 7, the method provided by the embodiment of the present application specifically includes the following steps:
  • Step 310 Divide the task container corresponding to the business job into at least one custom description resource object according to the task type.
  • Step 320 Obtain a target resource object from the custom description resource objects, and obtain the resource information of each cluster node.
  • The target resource object may be the resource object currently being processed among the multiple custom description resource objects; it may be obtained randomly or selected from the task queue.
  • The resource object currently to be processed among the custom description resource objects can be recorded as the target resource object.
  • the resource information of each cluster node can be extracted. This resource information can reflect the current performance status of different cluster nodes.
  • the resource information may include information such as the resource type, total amount of resources, and remaining amount of resources of the cluster node.
  • Step 330 Eliminate cluster nodes whose status information contains a taint label, and remove cluster nodes whose remaining resources are less than the resource requirement.
  • The taint label can be key-value attribute data defined on a cluster node, indicating that the cluster node refuses to schedule task containers.
  • The resource requirement may represent the amount of resources required by the task containers in the custom description resource object.
  • Each cluster node can be filtered through its status information.
  • The screening process can include extracting the status information of each cluster node; if the status information contains a taint label, the cluster node refuses to schedule task containers and can be removed. Cluster nodes can also be filtered by resource information.
  • The filtering process can include extracting the remaining resources from the resource information of each cluster node; if the remaining resources are less than the total resources required by the task containers in the custom description resource object, the cluster node cannot accommodate task containers for scheduling and can be removed.
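  • A minimal sketch of this two-part filter (taint check, then remaining-resource check) is shown below; the tainted flag and the dictionary layout are assumptions made for the example:

```python
def filter_nodes(nodes, demand):
    """Drop nodes that carry a taint label or whose remaining
    resources cannot cover the resource demand."""
    eligible = []
    for node in nodes:
        if node.get("tainted"):
            continue  # status information contains a taint label
        if any(node["remaining"].get(r, 0) < need for r, need in demand.items()):
            continue  # remaining resources below the requirement
        eligible.append(node)
    return eligible

nodes = [
    {"name": "n1", "tainted": True,  "remaining": {"cpu": 8}},
    {"name": "n2", "tainted": False, "remaining": {"cpu": 1}},
    {"name": "n3", "tainted": False, "remaining": {"cpu": 4}},
]
eligible = filter_nodes(nodes, {"cpu": 2})
```

Node n1 is removed for its taint, n2 for insufficient remaining CPU, leaving only n3 as a schedulable node.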
  • Step 340 Extract the first task container from the target resource object and record it as a sample container.
  • the example container can be the baseline task container used to measure the container resource status in the target resource object.
  • the example container can be the first task container added to the target resource object.
  • The selection of the example container is not limited to the first task container: a task container can also be chosen at random from the target resource object, or a virtual sample container can be built from the average resource amounts of all task containers in the target resource object.
  • The first task container in the custom description resource object recorded as the target resource object can be selected as the sample container.
  • Step 350 Extract the resource requirements of the example container.
  • The container list of the sample container can be extracted, and the resource requirements of each container object for different resources determined from it.
  • For example, the container list of the sample container can be extracted to determine each container object's demand for resources such as the central processing unit (CPU), graphics processing unit (GPU), memory, disk, and network bandwidth; the sum of each resource across container objects can be taken as the resource demand.
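  • Summing per-resource requests over the sample container's container list can be sketched as follows; the requests field name is an assumption for the illustration:

```python
def total_demand(container_list):
    """Sum per-resource requests (CPU, memory, ...) across all
    container objects in the sample container's list."""
    demand = {}
    for c in container_list:
        for resource, amount in c["requests"].items():
            demand[resource] = demand.get(resource, 0) + amount
    return demand

demand = total_demand([
    {"requests": {"cpu": 2, "mem": 4}},
    {"requests": {"cpu": 1}},
])
```

The per-resource totals (here 3 CPU and 4 memory units) form the resource demand used in the fitness and carrying-capacity calculations that follow.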
  • Step 360 Determine the fitness based on the matching degree between the resource demand and the remaining resource in the resource information.
  • the remaining amount of resources can be the remaining unoccupied values of different types of resources in each cluster node, such as the remaining CPU amount, remaining GPU amount, remaining memory, remaining disk, and remaining network bandwidth of the cluster node.
  • the matching degree can be determined based on the resource demand of the task container and the remaining resource amount of the cluster node.
  • The ratio of the resource demand to the remaining resource amount can be used as the matching degree.
  • When the ratio is less than 1, the closer it is to 1, the better the match.
  • Step 370 Determine the number of task containers that the cluster node can accommodate as the carrying capacity according to the resource demand and the remaining amount of resources in the resource information.
  • The accommodation quantity may be the number of sample containers a cluster node can hold, determined as the quotient of the remaining resource amount and the resource demand.
  • For each cluster node, the corresponding remaining resource amount can be extracted, and the quotient of the remaining resource amount and the sample container's resource demand used as the accommodation quantity.
  • The accommodation quantity can be taken as the carrying capacity of the cluster node for the task containers.
  • Step 380 Configure the scheduling relationship between the cluster nodes and the task containers according to the fitness and carrying capacity.
  • determining the fitness based on the matching degree between the resource demand and the remaining resource in the resource information includes:
  • the matching degree between the resource demand and the remaining resource is determined according to the LeastRequestedPriority policy and the BalancedResourceAllocation policy.
  • the matching degree of each cluster node and the sample container can be determined according to the LeastRequestedPriority policy and the BalancedResourceAllocation policy.
  • The LeastRequestedPriority policy can include determining the ratio of the resource demand to the total resource amount.
  • For example, the matching degree = (remaining resource amount - resource demand) / total resource amount; the higher this value, the higher the fitness of the cluster node for the task containers in the target resource object.
  • The BalancedResourceAllocation policy can give a higher node weight to nodes with smaller variance between CPU and memory usage; after the LeastRequestedPriority policy determines the matching degree, this weight, based on each cluster node's CPU and memory usage, can be used to adjust the matching degree.
  • Determining the number of task containers the cluster node can accommodate according to the resource demand and the remaining resource amount in the resource information includes the following.
  • The demands of the example container for different resources can be extracted, for example, its CPU, GPU, memory, disk, and network bandwidth requirements.
  • The remaining amounts of the same resource types are also extracted for each cluster node.
  • The quotient of the remaining amount and the demand can be used as the accommodation number for the corresponding resource.
  • The smallest of these per-resource accommodation numbers can be used as the number of example containers the cluster node can accommodate.
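  • The carrying-capacity rule just described (the minimum, over resource types, of remaining divided by demand) reduces to one line; the dictionary shapes are assumed for the example:

```python
def carrying_capacity(remaining, demand):
    """Number of sample containers a node can hold: the minimum,
    over resource types, of remaining // demand."""
    return min(remaining[r] // demand[r] for r in demand if demand[r] > 0)

# A node with 8 CPU and 6 memory units left, per-container demand 2 CPU / 3 mem:
cap = carrying_capacity({"cpu": 8, "mem": 6}, {"cpu": 2, "mem": 3})
```

CPU alone would allow four containers, but memory allows only two, so the node's carrying capacity is the scarcer of the two.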
  • Figure 8 is a flow chart of another container scheduling method provided by the embodiment of the present application.
  • The embodiment of the present application is a concrete implementation based on the above embodiment. Referring to Figure 8, the method provided by the embodiment of the present application specifically includes the following steps:
  • Step 410 Divide the task container corresponding to the business job into at least one custom description resource object according to the task type.
  • Step 420 Determine the fitness and carrying capacity of each cluster node for the task containers in the custom description resource objects.
  • Step 430 For the same task container, sort the cluster nodes by fitness value from large to small.
  • For the same task container, the fitness values can be sorted from large to small, and the cluster nodes arranged according to the correspondence between fitness values and nodes; each task container can thus have a sorted sequence of its corresponding cluster nodes.
  • Step 440 Use the top 20 percent of cluster nodes in the cluster node ranking as candidate nodes.
  • the top 20 percent of cluster nodes can be selected as candidate nodes in the ranking of cluster nodes, and the candidate nodes can be used for task container scheduling.
  • Step 450 Determine the quotient of the total number of containers of the task container and the total number of nodes of the candidate nodes.
  • the total number of nodes may be the total number of nodes selected as candidate nodes in the cluster, and the total number of containers may be the total number of task containers included in the custom resource description object.
  • the quotient of the total number of containers and the total number of nodes can be calculated.
  • Step 460 Allocate task containers to each candidate node according to the quotient value and the carrying capacity.
  • Task containers of the custom description resource object can be assigned to candidate nodes based on the quotient and carrying capacity determined above: for example, a number of task containers equal to the quotient can be allocated to each candidate node, and the number of task containers allocated to a candidate node must not exceed its carrying capacity.
  • Step 470 Establish a scheduling relationship between each task container and its corresponding candidate node.
  • The identification information of the candidate node to which a task container is assigned can be stored in the task container and used to indicate the scheduling relationship between the task container and the candidate node.
  • For example, a node name field can be set in the task container to store the node name of the candidate node to which the task container is assigned.
  • the task container is allocated to each candidate node according to the quotient value and the carrying capacity, including:
  • Candidate nodes can be sorted according to their carrying capacity, and each candidate node is examined in sequence to determine whether its carrying capacity is greater than or equal to the quotient of the total number of containers and the total number of nodes. If the carrying capacity is greater than or equal to the quotient value, the quotient-value number of task containers is configured for the current candidate node; if it is less than the quotient value, the carrying-capacity number of task containers is configured for the current candidate node.
  • if task containers remain unallocated, the 20% of cluster nodes ranked immediately after the current candidate nodes are used as new candidate nodes, and the unallocated task containers are allocated to the new candidate nodes.
  • each task container can be examined to determine whether it has been assigned to a candidate node. For example, it is judged whether the node name field in the task container is empty; if it is empty, it is determined that the task container has not yet been assigned to a candidate node. In this case, the currently selected candidate nodes cannot satisfy the requirement of scheduling all task containers, so the currently selected candidate nodes are taken as the starting point in the cluster node ranking, the 20% of cluster nodes behind the starting point in the ranking are re-selected as new candidate nodes, and the unassigned task containers can be allocated to the new candidate nodes according to the above allocation method.
  • a scheduling relationship is established between each task container and its corresponding candidate node. Binding the scheduling relationship may consist of setting the node name field of the task container to the identification information of the candidate node to which the task container belongs.
  • the identification information may include the network address, node name or node number of the candidate node, etc.
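The allocation logic of steps 450 to 470 can be sketched as follows. This is a minimal Python illustration under assumed data shapes (a pre-sorted candidate list of name/capacity pairs), not the embodiment's actual implementation; the text does not specify how the quotient is rounded, so it is rounded up here so that quotient-sized shares can cover all containers.

```python
import math

def allocate_containers(num_containers, candidates):
    """Assign task containers to candidate nodes.

    candidates: list of (node_name, carrying_capacity), already sorted.
    Each node receives the quotient-value number of containers if its
    carrying capacity allows, otherwise its carrying capacity.
    Returns (plan, unassigned): per-node counts plus how many containers
    are left over for the next batch of candidate nodes.
    """
    quotient = math.ceil(num_containers / len(candidates))
    plan, remaining = {}, num_containers
    for name, capacity in candidates:
        if remaining == 0:
            break
        # Quotient if the node can carry it, else its carrying capacity.
        count = min(quotient if capacity >= quotient else capacity, remaining)
        plan[name] = count
        remaining -= count
    return plan, remaining
```

If `unassigned` is non-zero, the fallback described above applies: the next 20% of ranked cluster nodes become the new candidate list and the remainder is allocated there.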
  • Figure 9 is a schematic structural diagram of a container scheduling device provided by an embodiment of the present application. It can execute the container scheduling method provided by any embodiment of the present application and has functional modules and beneficial effects corresponding to the execution method.
  • the device can be implemented by software and/or hardware. Referring to Figure 9, the device provided by the embodiment of the present application specifically includes the following:
  • the task division module 501 is used to divide the task container corresponding to the business job into at least one custom description resource object according to the task type.
  • the cluster parameter module 502 is used to determine the adaptability and carrying capacity of each cluster node to the task container in the custom description resource object.
  • the scheduling setting module 503 is configured to configure the scheduling relationship between the cluster node and the task container according to the fitness and the carrying capacity.
  • the task division module divides the task containers into different custom description resource objects according to the task types of the business jobs;
  • the cluster parameter module determines the fitness and carrying capacity of the cluster nodes for the task containers in the respective custom description resource objects;
  • the scheduling setting module configures the scheduling relationship between task containers and cluster nodes according to fitness and carrying capacity. While realizing batch scheduling of task containers, this improves container scheduling efficiency; allocating task containers to different cluster nodes based on fitness and carrying capacity improves the matching between task containers and cluster nodes and alleviates the problem of cluster node resource competition.
  • the device also includes:
  • the queue processing module is used to sort each of the custom description resource objects according to the task queue.
  • the device also includes:
  • a resource object filtering module is used to filter the custom description resource objects according to resource object types.
  • the task division module 501 includes:
  • a container creation unit is configured to create the task containers corresponding to the tasks included in the business job, wherein the task container includes an image name, a container startup command, a container startup parameter, and a task type label.
  • the resource object unit is used to create a custom description resource object corresponding to the business job, where the custom description resource object includes a name, a job type label, and a scheduling priority label.
  • a division execution unit is configured to divide each task container into the custom description resource object with a matching job type tag according to the task type tag of the task container.
  • the cluster parameter module 502 includes:
  • An example determining unit is configured to extract the first task container from the target resource object and record it as an example container.
  • a resource extraction unit is used to extract the resource requirements of the example container.
  • a fitness unit configured to determine the fitness based on the matching degree between the resource demand and the remaining resource in the resource information.
  • a carrying capacity unit configured to determine, according to the resource demand and the remaining amount of resources in the resource information, the number of task containers that a cluster node can accommodate, as the carrying capacity.
  • the cluster parameter module 502 also includes:
  • a cluster screening unit is configured to remove cluster nodes whose status information contains taint labels from each cluster node; and remove cluster nodes whose remaining resources are less than the resource requirements from each cluster node.
  • the fitness unit is specifically used to determine the matching degree between the resource demand and the remaining resource according to the LeastRequestedPriority policy and the BalancedResourceAllocation policy.
  • the carrying capacity unit is specifically used to: extract the resource requirements of various types of resources in the example container and the remaining amounts of various types of resources in the resource information; for each type of resource, determine the quotient of the resource remaining amount and the resource demand, and take the minimum value among the quotients as the carrying capacity.
  • the scheduling setting module 503 includes:
  • a sorting unit configured to sort the cluster nodes according to the fitness value from large to small for the same task container.
  • a candidate selection unit is configured to select the top 20 percent of the cluster nodes in the ranking of the cluster nodes as candidate nodes.
  • a quotient determining unit is used to determine the quotient of the total number of containers of the task container and the total number of nodes of the candidate nodes.
  • a task allocation unit configured to allocate the task container to each of the candidate nodes according to the quotient value and the carrying capacity.
  • a relationship establishing unit is used to establish a scheduling relationship between each of the task containers and the corresponding candidate nodes.
  • the task allocation unit is specifically configured to: determine, in descending order of the fitness of each candidate node, whether the carrying capacity of the candidate node is greater than or equal to the quotient value; if so, configure the candidate node with the quotient-value number of task containers; if not, configure the candidate node with the carrying-capacity number of task containers.
  • the scheduling setting module 503 also includes:
  • An exception processing unit is used to, upon determining that there is a task container not assigned to a candidate node, use the 20 percent of cluster nodes ranked next in the cluster node ranking as new candidate nodes and assign the unallocated task containers to the new candidate nodes.
  • the relationship establishment unit is specifically configured to set the node name field of the task container to the identification information corresponding to the candidate node.
  • the queue processing module is specifically used to: sort the custom description resource objects in the task queue according to the order in which they joined the queue; and sort the custom description resource objects in the task queue according to their scheduling priority.
  • the Kubernetes cluster consists of a Kubernetes Master node and several Kubernetes Node nodes.
  • the Kubernetes Master node is responsible for scheduling the jobs to be scheduled.
  • the Kubernetes Node is responsible for deploying and running tasks to be scheduled based on the binding results of the Kubernetes Master node.
  • the container scheduling device provided by the embodiment of this application is composed of six components: the container creation module of Kubernetes API-Server, the event listening module of Kubernetes API-Server, Kubernetes Etcd, the Kubernetes-based batch scheduler, the container scheduling queue in the Kubernetes-based batch scheduler, and the scheduling binding module of Kubernetes API-Server.
  • the container creation module of Kubernetes API-Server is used to create corresponding containers for each task in big data or artificial intelligence jobs and divide the tasks into different to-be-scheduled jobs according to task type labels;
  • the event listening module of Kubernetes API-Server is used to monitor the creation events of the container creation module of Kubernetes API-Server and verify the type of the job to be scheduled;
  • Kubernetes Etcd is used to store the status information and cluster information of each component running in the batch scheduling device proposed in this application;
  • the status information of each component running in the batch scheduling device includes the time point of component operation error and the error event log;
  • the cluster information includes the status information and resource information of each node in the cluster.
  • the container scheduling queue in the Kubernetes-based batch scheduler is used to sort the scheduling order according to the scheduling priority of the jobs to be scheduled and the time point at which they are added to the queue;
  • the Kubernetes-based batch scheduler is used to schedule the jobs to be scheduled that are popped from the container scheduling queue;
  • the batch scheduler based on Kubernetes includes a filtering module, a fitness calculation module, a carrying capacity calculation module and a scheduling node selection module;
  • the filtering module is used to filter out cluster nodes that do not pass the status verification and resource verification;
  • the fitness calculation module is used to calculate the fitness of the nodes that pass the filtering module;
  • the carrying capacity calculation module is used to calculate the carrying capacity of the nodes that pass the filtering module;
  • the scheduling node selection module is used to select the most appropriate scheduling node for each task in the job to be scheduled based on the fitness and carrying capacity values of the nodes;
  • Figure 12 is a flow example diagram of a container scheduling method provided by an embodiment of this application.
  • the method provided by this embodiment of this application includes the following seven steps: job division, job type verification, sorting of jobs to be scheduled, node filtering and fitness calculation, node carrying capacity calculation, scheduling node selection, and node binding.
  • the container creation module of Kubernetes API-Server creates corresponding containers based on job configuration requests submitted by users and divides the tasks in the job into different to-be-scheduled jobs based on task type labels.
  • the event listening module of Kubernetes API-Server verifies the type of job to be scheduled.
  • the container scheduling queue in the Kubernetes-based batch scheduler prioritizes the jobs to be scheduled that pass the job type test and pops the jobs to be scheduled from the head of the queue for scheduling.
  • the filtering module and fitness calculation module of the Kubernetes batch scheduler perform node filtering and fitness calculation respectively.
  • the carrying capacity calculation module of the Kubernetes batch scheduler calculates the carrying capacity of the node.
  • the scheduling node selection module of the Kubernetes-based batch scheduler selects the most appropriate scheduling node for each task in the job to be scheduled based on the node's fitness and carrying capacity.
  • the scheduling binding module of Kubernetes API-Server binds each task of the job to be scheduled with its scheduling node.
  • the container creation module of Kubernetes API-Server creates a corresponding container for each task in a big data or artificial intelligence job and divides the tasks into different to-be-scheduled jobs based on task type labels, as shown in Figure 13.
  • the task types in each to-be-scheduled job finally divided are the same.
  • This step may include the following steps:
  • the container creation module of Kubernetes API-Server creates a corresponding Kubernetes Pod (hereinafter referred to as Pod) for each task in the job.
  • the created Pod object includes the image name of the container in the Pod, the container startup command, the container startup parameters, and the task type label corresponding to the Pod.
  • the container creation module of Kubernetes API-Server creates a Kubernetes CRD (hereinafter referred to as CRD) resource object for the job that describes the job to be scheduled.
  • the created CRD resource object includes the name of the CRD resource object, the job type label corresponding to the CRD resource object, and the scheduling priority label of the CRD resource object.
  • the container creation module of Kubernetes API-Server divides Pods with the same task type label into the same CRD resource object corresponding to the job to be scheduled.
  • the container creation module of Kubernetes API-Server traverses all Pod objects of big data or artificial intelligence jobs, and divides Pod objects with consistent task type labels into the same CRD resource object corresponding to the job to be scheduled.
  • the job type label of the CRD resource object is consistent with the task type label of all Pod objects it contains.
  • the container creation module of Kubernetes API-Server instantiates the CRD resource object corresponding to the job to be scheduled and stores it in Kubernetes API-Server.
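The job-division step above can be sketched as a simple grouping by task type label. This is an illustrative Python sketch with Pods modeled as plain dicts; the field names (`task_type`, `job_type`, `pods`) are assumptions, not the actual Pod or CRD schema.

```python
def divide_into_jobs(pods):
    """Group task Pods into to-be-scheduled jobs by task type label.

    pods: list of dicts carrying the fields the embodiment names
    (image name, start command/args, task type label); only the
    task type label is used here.
    """
    jobs = {}
    for pod in pods:
        jobs.setdefault(pod["task_type"], []).append(pod)
    # One CRD resource object per task type; its job type label matches
    # the task type label of every Pod it contains.
    return [{"job_type": t, "pods": ps} for t, ps in jobs.items()]
```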
  • Step 2 Job type verification
  • the event listening module of Kubernetes API-Server verifies the type of job to be scheduled, as shown in Figure 14. Only jobs to be scheduled that pass type verification can be sent to the container scheduling queue.
  • This step may include the following steps:
  • the event listening module of Kubernetes API-Server listens to the object creation event in the container creation module of Kubernetes API-Server.
  • the event listening module of Kubernetes API-Server will verify the type of object corresponding to the job to be scheduled in the creation event. If the resource type created by the to-be-scheduled job is not the CRD resource type describing the batch job, the verification will be deemed to have failed.
  • the event listening module of Kubernetes API-Server writes the time point at which the verification ends and the verification error into the Kubernetes event log and stores it in Kubernetes Etcd; otherwise, the verification is deemed to have passed, and the CRD resource object corresponding to the job to be scheduled is Send it to the container scheduling queue in the Kubernetes-based batch scheduler and perform the third step.
  • Step 3 Sorting of jobs to be scheduled
  • the container scheduling queue in the Kubernetes-based batch scheduler schedules them according to the scheduling priority of the jobs defined in the CRD resource object corresponding to the job to be scheduled and the time sequence in which they arrive at the container scheduling queue in the Kubernetes-based batch scheduler. For the ordering, see Figure 15.
  • This step may include the following steps:
  • the container scheduling queue in the Kubernetes-based batch scheduler listens for the joining event of the CRD resource object corresponding to the job to be scheduled.
  • the CRD resource object corresponding to the new to-be-scheduled job will be added to the end of the queue by default.
  • when the container scheduling queue in the Kubernetes-based batch scheduler receives a new CRD resource object corresponding to a job to be scheduled, it sorts the CRD resource objects corresponding to all jobs to be scheduled in the queue in descending order of their scheduling priority tags, that is, the CRD resource object corresponding to the to-be-scheduled job with the highest priority is placed at the head of the queue, and the CRD resource object corresponding to the to-be-scheduled job with the lowest priority is placed at the tail of the queue.
  • the container scheduling queue in the Kubernetes-based batch scheduler pops the CRD resource objects corresponding to the jobs to be scheduled from the head of the queue to perform the fourth step.
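The queue ordering in step three (priority first, arrival order as tiebreaker) can be sketched as below. The `priority`/`name` keys are illustrative stand-ins for the CRD's scheduling priority label.

```python
def sort_scheduling_queue(queue):
    """Order to-be-scheduled CRD objects: higher scheduling priority
    first. Python's sort is stable, so objects with equal priority
    keep their arrival (enqueue) order, matching FIFO behavior."""
    return sorted(queue, key=lambda crd: -crd["priority"])
```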
  • Step 4 Node filtering and fitness calculation
  • the filtering module of the Kubernetes-based batch scheduler performs primary filtering on the cluster nodes based on the node information in the cluster and the resource information of the CRD resource object corresponding to the job to be scheduled. Subsequently, the fitness calculation module of the Kubernetes-based batch scheduler performs node fitness calculation on the nodes that pass the primary filtering, see Figure 16.
  • This step may include the following steps:
  • the Kubernetes-based batch scheduler monitors the pop-up event of the CRD resource corresponding to the to-be-scheduled job in the container scheduling queue in the Kubernetes-based batch scheduler.
  • the Kubernetes-based batch scheduler initiates a request to Kubernetes Etcd to obtain the information of each node in the current cluster.
  • The node information includes status information and resource information. The status information of a node mainly indicates whether the node can be scheduled; the resource information of the node refers to the total amount and remaining amount of various resources on the node.
  • Various resources of nodes include CPU, GPU, memory, disk and network bandwidth resources.
  • the filtering module of the Kubernetes batch scheduler performs status verification on the obtained status of each node. If the status of the current node contains NoExecute or NoSchedule taint, it means that the node is set to be unschedulable and the node status verification fails; otherwise, the node passes the status verification.
  • the filtering module of the Kubernetes batch scheduler performs resource verification on nodes that pass status verification.
  • the filtering module of the Kubernetes-based batch scheduler takes the first Pod object from the CRD resource object in the job to be scheduled as a sample Pod object. Subsequently, the filtering module of the Kubernetes-based batch scheduler obtains the various resource requirements of the sample Pod object.
  • the various resources required by the example Pod object include CPU, GPU, memory, disk and network bandwidth resources. For any type of resource, if the resource demand of the example Pod object is greater than the remaining resource of the node, the node resource verification fails; otherwise, the node passes the resource verification.
  • the filtering module of the Kubernetes-based batch scheduler stores nodes that pass status verification and resource verification in the schedulable node list.
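The primary filtering above (status verification, then resource verification against the sample Pod's demand) can be sketched as follows. The node and demand dict shapes are assumptions for illustration only.

```python
def filter_nodes(nodes, sample_demand):
    """Primary filtering: drop nodes with a NoExecute/NoSchedule taint
    (status verification), then keep only nodes whose remaining
    resources cover every resource type of the sample Pod's demand
    (resource verification). Returns the schedulable node list."""
    schedulable = []
    for node in nodes:
        if {"NoExecute", "NoSchedule"} & set(node.get("taints", [])):
            continue  # node marked unschedulable: status check fails
        if all(node["free"].get(r, 0) >= need
               for r, need in sample_demand.items()):
            schedulable.append(node["name"])  # passes resource check
    return schedulable
```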
  • the fitness calculation module of the Kubernetes-based batch scheduler traverses the nodes in the schedulable node list in sequence.
  • the fitness calculation module of the Kubernetes-based batch scheduler uses the various resource requirements of the sample Pod in the CRD resource object of the job to be scheduled and the remaining amounts of the node's various resources to calculate the node fitness.
  • the fitness value of a node is specifically the score of the node. The higher the score of a node, the more suitable it is for the sample Pod object to be deployed on the node. The score of the node is calculated through Kubernetes' native LeastRequestedPriority and BalancedResourceAllocation optimization strategies.
  • the fitness calculation module of the Kubernetes-based batch scheduler stores the fitness value of each node in the schedulable node list and executes the fifth step.
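The scoring can be sketched as a simplified two-resource version of Kubernetes' LeastRequestedPriority and BalancedResourceAllocation strategies. Note this is an assumption-laden sketch: the native implementations score 0-10 over requested versus allocatable resources and cover more factors, and averaging the two scores equally is a simplification, not necessarily the embodiment's weighting.

```python
def fitness(pod_demand, node_free, node_capacity):
    """Score a node for the sample Pod (higher = better fit).

    `requested` models the node's usage after placing the sample Pod.
    Simplified to CPU and memory only.
    """
    requested = {r: node_capacity[r] - node_free[r] + pod_demand[r]
                 for r in ("cpu", "memory")}
    # LeastRequestedPriority: reward nodes with more unused capacity.
    least_requested = sum(
        (node_capacity[r] - requested[r]) / node_capacity[r] * 10
        for r in ("cpu", "memory")) / 2
    # BalancedResourceAllocation: penalize CPU/memory usage imbalance.
    balanced = 10 - abs(requested["cpu"] / node_capacity["cpu"]
                        - requested["memory"] / node_capacity["memory"]) * 10
    return (least_requested + balanced) / 2
```

With this scoring, an emptier node scores higher for the same Pod, which is what drives the descending fitness sort in step six.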
  • Step 5 Node carrying capacity calculation
  • the carrying capacity calculation module of the Kubernetes-based batch scheduler calculates, for each node in the schedulable node list, the number of Pod objects to be scheduled in the CRD resource object corresponding to the job to be scheduled that the node can accommodate, as the carrying capacity of the node, as shown in Figure 17.
  • This step may include the following steps:
  • the carrying capacity calculation module of the Kubernetes-based batch scheduler takes the Pod object corresponding to the first task from the CRD resource object corresponding to the job to be scheduled as a sample Pod object.
  • the carrying capacity calculation module of the Kubernetes batch scheduler obtains the various resource requirements of the sample Pod object.
  • the various resources required by the example Pod object include CPU, GPU, memory, disk, and network bandwidth resources.
  • the carrying capacity calculation module of the Kubernetes batch scheduler obtains the container list in the sample Pod object and traverses the container objects in the container list in order to obtain the various resource requirements of each container object; the various resource requirements of each container object are accumulated to obtain the various resource requirements of the sample Pod object.
  • the carrying capacity calculation module of the Kubernetes batch scheduler obtains the remaining amounts of various types of resources on each node.
  • Various resources of nodes include CPU, GPU, memory, disk and network bandwidth resources.
  • the carrying capacity calculation module of the Kubernetes batch scheduler calculates, for each type of resource, the quotient of the node's remaining resource amount and the sample Pod object's resource demand, and takes the integer part.
  • the smallest value of the quotient among all types of resources is the maximum number of sample Pod objects in the CRD resource object corresponding to the job to be scheduled that the node can accommodate.
  • the carrying capacity calculation module of the Kubernetes-based batch scheduler stores the value with the smallest quotient among all types of resources as the carrying capacity value of the node and executes step six.
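Step five reduces to two small computations: summing per-container demands into the sample Pod's demand, then taking the minimum floor-quotient across resource types. A sketch, with resources as plain dicts (CPU, GPU, memory, disk, bandwidth would all be keys of the same shape):

```python
def pod_demand(containers):
    """Accumulate each container's per-resource requests into the
    sample Pod's total resource demand."""
    total = {}
    for container in containers:
        for resource, amount in container.items():
            total[resource] = total.get(resource, 0) + amount
    return total

def carrying_capacity(node_free, demand):
    """For each resource type take the integer part of remaining /
    demand; the minimum across resource types is how many copies of
    the sample Pod the node can accommodate."""
    return min(node_free[r] // demand[r] for r in demand if demand[r] > 0)
```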
  • Step 6 Scheduling node selection
  • the scheduling node selection module of the Kubernetes-based batch scheduler selects the most appropriate scheduling node for the Pod object corresponding to each task in the CRD resource object to be scheduled based on the fitness of the nodes in the schedulable node list and the nodes' carrying capacity, as shown in Figure 18.
  • This step may include the following steps:
  • the scheduling node selection module of the Kubernetes batch scheduler obtains the fitness and carrying capacity values of each node in the schedulable node list.
  • the fitness of the node is obtained through the node filtering and fitness calculation in the fourth step, and the carrying capacity of the node is obtained through the node carrying capacity calculation in the fifth step.
  • the scheduling node selection module of the Kubernetes-based batch scheduler selects the most appropriate scheduling node for the Pod object corresponding to each task in the CRD resource object corresponding to the job to be scheduled based on the fitness and carrying capacity values of each node. Among them, nodes with higher fitness values indicate that task scheduling has a higher degree of adaptability at that node, and therefore will be given priority when selecting scheduling nodes.
  • the carrying capacity value indicates the upper limit of the number of tasks to be deployed on the node and determines the number of tasks that can be scheduled on the node. Specifically, the scheduling node selection module of the Kubernetes-based batch scheduler first sorts the nodes in the schedulable node list in descending order according to the node's fitness value.
  • the scheduling node selection module of the Kubernetes batch scheduler selects the top 20% of nodes by fitness value in the schedulable node list and stores them as a candidate scheduling node list (the candidate scheduling node list contains at least one candidate node).
  • the scheduling node selection module of the Kubernetes-based batch scheduler compares the number of Pod objects corresponding to tasks contained in the CRD resource object corresponding to the job to be scheduled with the number of nodes in the candidate scheduling node list, so as to determine the number of Pod objects that each candidate scheduling node in the list needs to deploy.
  • if the number of Pod objects is less than or equal to the number of candidate nodes, each participating node deploys one Pod object corresponding to a task in the CRD resource object corresponding to the job to be scheduled, until the Pod objects corresponding to all tasks in the CRD resource object have been assigned scheduling nodes.
  • the scheduling node selection module of the Kubernetes-based batch scheduler defines a variable named scheduling number, which represents the number of Pod objects corresponding to tasks in the CRD resource object of the job to be scheduled that each candidate node in the candidate node list deploys. In the case above, the scheduling node selection module sets the scheduling number of all nodes participating in the deployment to 1; otherwise, the quotient of the number of Pod objects and the number of candidate nodes is calculated, and the nodes in the candidate node list deploy the CRD resource object according to this quotient value.
  • if a node's carrying capacity is less than the quotient value, the node's scheduling number is set to the node's carrying capacity value; otherwise, the node's scheduling number is set to the quotient value.
  • if a scheduling error occurs, the scheduling node selection module of the Kubernetes batch scheduler writes the location of the scheduling error and the cause of the error into the Kubernetes event log and stores it in Kubernetes Etcd. After all nodes in the candidate node list have completed setting their scheduling number values, the scheduling node selection module of the Kubernetes-based batch scheduler stores the scheduling number values of all nodes in the candidate scheduling node list.
  • the scheduling node selection module of the Kubernetes batch scheduler accumulates the scheduling number values of all nodes in the candidate node list. If the sum of the scheduling numbers of all nodes in the candidate node list is less than the number of Pods corresponding to tasks in the CRD resource object corresponding to the job to be scheduled, the scheduling node selection module clears the candidate scheduling node list, stores the 20% of nodes with the next-highest fitness values as the new candidate scheduling node list, and repeats steps four to six until the Pod objects corresponding to all tasks in the CRD resource object corresponding to the job to be scheduled are assigned scheduling nodes; if the sum of the scheduling numbers is equal to the number of Pods in the CRD resource object to be scheduled, it means that the Pod objects corresponding to all tasks in the CRD resource object corresponding to the job to be scheduled have found the most suitable scheduling nodes.
  • the scheduling node selection module of the Kubernetes batch scheduler sends the Pod object corresponding to each task in the CRD resource object of the to-be-scheduled job, together with its corresponding scheduling node, as the scheduling result to the scheduling binding module of Kubernetes API-Server, and executes the seventh step.
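The iterative part of step six (per-window scheduling numbers with fallback to the next 20% of nodes) can be sketched as a loop over successive windows of the fitness-ranked node list. The window arithmetic (`ceil` of 20%, quotient recomputed per window over remaining Pods) is an assumption where the text is silent.

```python
import math

def select_scheduling_nodes(num_pods, ranked_nodes, capacity):
    """Assign scheduling numbers over successive 20% windows of the
    fitness-ranked node list: each candidate gets min(quotient,
    carrying capacity), and if a window cannot absorb all Pods, the
    next 20% of nodes becomes the candidate list.
    Returns {node_name: scheduling_number}."""
    window = max(1, math.ceil(len(ranked_nodes) * 0.2))
    plan, remaining, start = {}, num_pods, 0
    while remaining > 0 and start < len(ranked_nodes):
        candidates = ranked_nodes[start:start + window]
        quotient = math.ceil(remaining / len(candidates))
        for node in candidates:
            count = min(quotient, capacity[node], remaining)
            if count:
                plan[node] = count
                remaining -= count
        start += window
    return plan
```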
  • Step 7 Node binding
  • the scheduling binding module of Kubernetes API-Server binds the Pod object corresponding to each task in the CRD resource object corresponding to the job to be scheduled to its corresponding scheduling node, as shown in Figure 19.
  • This step may include the following steps:
  • the scheduling binding module of Kubernetes API-Server monitors the scheduling result sending events of the Kubernetes batch scheduler.
  • the scheduling binding module of the Kubernetes API-Server parses from the scheduling results the Pod objects corresponding to each task in the CRD resource object corresponding to the job to be scheduled, together with their scheduling nodes. If the scheduling binding module of Kubernetes API-Server parses an object incorrectly, it writes the location of the parsing error and the cause of the error into the Kubernetes event log and stores it in Kubernetes Etcd.
  • the scheduling binding module of Kubernetes API-Server binds the Pod object corresponding to each task in the CRD resource object corresponding to the job to be scheduled to its scheduling node. Specifically, the scheduling binding module traverses the Pod objects corresponding to the tasks in the CRD resource object, sets the NodeName field in each Pod object to the name of its scheduling node, and asynchronously updates the NodeName field of the Pod in Kubernetes API-Server to the name of the node.
  • if the binding fails, the scheduling binding module of the Kubernetes API-Server will write the location of the binding error and the cause of the error into the Kubernetes event log and store it in Kubernetes Etcd.
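The binding step itself is a small operation: write each scheduling node's name into the corresponding Pod's NodeName field. A sketch with Pods as plain dicts; in the described device this update goes through Kubernetes API-Server (asynchronously) rather than local mutation, and the dict layout here is illustrative.

```python
def bind_pods(pods, scheduling_result):
    """pods: {pod_name: pod_dict}; scheduling_result: {pod_name:
    node_name}. Sets spec.nodeName on each Pod, which is what marks
    the Pod as bound to its scheduling node."""
    for pod_name, node_name in scheduling_result.items():
        pods[pod_name].setdefault("spec", {})["nodeName"] = node_name
    return pods
```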
  • Figure 20 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device includes a processor 60, a memory 61, an input device 62 and an output device 63; the number of processors 60 in the electronic device may be one or more,
  • and Figure 20 takes one processor 60 as an example; in the electronic device, the processor 60, memory 61, input device 62 and output device 63 can be connected through a bus or other means.
  • a bus connection is taken as an example.
  • the memory 61 can be used to store software programs, computer executable programs and modules, such as the modules corresponding to the container scheduling device in the embodiment of the present application (task division module 501, cluster parameter module 502 and scheduling setting module 503).
  • the processor 60 executes various functional applications and data processing of the electronic device by running software programs, instructions and modules stored in the memory 61, that is, implementing the above container scheduling method.
  • the memory 61 may mainly include a stored program area and a stored data area, wherein the stored program area may store an operating system and at least one application program required for a function; the stored data area may store data created according to the use of the electronic device, etc.
  • the memory 61 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • the memory 61 may further include memories remotely located relative to the processor 60, and these remote memories may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the input device 62 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device.
  • the output device 63 may include a display device such as a display screen.
  • Embodiments of the present application also provide a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform a container scheduling method. The method includes: dividing task containers corresponding to a service job into at least one custom description resource object according to task type; determining the fitness and carrying capacity of cluster nodes for the task containers in the custom description resource object; and configuring a scheduling relationship between the cluster nodes and the task containers according to the fitness and the carrying capacity.
  • the present application can be implemented with the help of software and necessary general-purpose hardware, and of course can also be implemented by hardware. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the related technologies, can be embodied in the form of a software product.
  • the computer software product can be stored in a computer-readable storage medium, such as a computer floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk or an optical disk, and includes a number of instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) perform the methods described in the various embodiments of this application.
  • the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor or a microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on a computer-readable medium.
  • Computer-readable media may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tapes, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of the present application provide a container scheduling method, an electronic device and a storage medium. The method includes: dividing task containers corresponding to a service job into at least one custom description resource object according to task type; determining the fitness and carrying capacity of cluster nodes for the task containers in the custom description resource object; and configuring a scheduling relationship between the cluster nodes and the task containers according to the fitness and the carrying capacity.

Description

一种容器调度方法、电子设备和存储介质 技术领域
本申请涉及计算机应用技术领域,例如涉及一种容器调度方法、电子设备和存储介质。
背景技术
近年来,大数据和人工智能技术发展飞速,数据挖掘、数据收集、数据处理、数据汇总和深度学习成为当前云数据中心的主流作业类型。这些不同类型的数据作业都需要划分成相互依赖的任务来协同运行。
随着容器虚拟化技术的发展,用户倾向于将任务连同它们的依赖封装到轻量级容器内运行。相关技术中的容器部署方案为大数据和人工智能作业中的每个任务创建各自对应的容器运行,由于大数据和人工智能作业中往往包含多个任务,因此一个大数据和人工智能作业中往往包含多个容器。但相关技术中业务作业的批量调度效率仍然较低。
发明内容
本申请实施例提出一种容器调度方法、电子设备和存储介质,在实现业务作业的批量调度的同时,可提高容器调度效率,降低业务作业等待时间。
本申请实施例还提供了一种容器调度方法,其中,该方法包括:
根据任务类型将业务作业对应的任务容器划分到至少一个自定义描述资源对象;
确定集群节点对所述自定义描述资源对象内所述任务容器的适应度和承载能力;
根据所述适应度和所述承载能力配置所述集群节点与所述任务容器的调度关系。
本申请实施例还提供了一种电子设备,其中,该电子设备包括:
一个或多个处理器;
存储器,用于存储一个或多个程序;
当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如本申请实施例中任一所述方法。
本申请实施例还提供了一种计算机可读存储介质,其中,该计算机可读存储介质存储有一个或多个程序,所述一个或多个程序被所述一个或多个处理器执行,以实现如本申请实施例中任一所述方法。
本申请实施例,通过按照业务作业的任务类型划分任务容器到不同的自定义描述资源对象中,确定集群节点针对各自定义描述资源对象的中任务容器的适应度和承载能力,按照适应度和承载能力配置任务容器与集群节点间的调度关系,在实现任务容器的批量调度的同时,可提高容器调度效率,依据适应度和承载能力将任务容器分配到不同的集群节点,可提高任务容器与集群节点匹配度,缓解集群节点资源竞争的问题。
附图说明
图1是一种容器流水线调度方法的工作模式示意图;
图2是一种容器批量调度方法的工作模式示意图;
图3是一种大数据或人工智能作业的组成示意图;
图4是一种容器批量调度中资源竞争问题的示意图;
图5是本申请实施例提供的一种容器调度方法的流程图;
图6是本申请实施例提供的一种容器调度方法的流程图;
图7是本申请实施例提供的另一种容器调度方法的流程图;
图8是本申请实施例提供的另一种容器调度方法的流程图;
图9是本申请实施例提供的一种容器调度装置的结构示意图;
图10是本申请实施例提供的一种容器调度方法的场景示意图;
图11是本申请实施例提供的一种容器调度装置的示例图;
图12是本申请实施例提供的一种容器调度方法的流程示例图;
图13是本申请实施例提供的一种作业划分的流程示例图;
图14是本申请实施例提供的一种作业类型校验的流程示例图;
图15是本申请实施例提供的一种调度作业排序的流程示例图;
图16是本申请实施例提供的一种节点过滤及适应度计算的流程示例图;
图17是本申请实施例提供的一种节点承载能力计算的流程示例图;
图18是本申请实施例提供的一种调度节点选择的流程示例图;
图19是本申请实施例提供的一种节点绑定的流程示例图;
图20是本申请实施例提供的一种电子设备的结构示意图。
具体实施方式
应当理解,此处所描述的具体实施仅仅用以解释本申请,并不用于限定本申请。
在后续的描述中,使用用于表示元件的诸如“模块”、“部件”或“单元”后缀仅为了有利于本申请的说明,其本身没有特有的意义,因此,“模块”、“部件”或“单元”可以混合地使用。
相关技术中的容器调度方法中如Kubernetes以容器为单位采用流水线调度方式进行容器调度,即在一个调度周期内只对一个任务对应的容器进行调度,参见图1,流水线容器调度在处理大数据和人工智能作业时存在调度效率低下的缺陷。为了提高大数据和人工智能作业的调度效率,研究者提出了基于图2所示的批量调度方式,批量调度方式是在每个调度周期内将包含不同类型任务的大数据或人工智能作业作为调度单元进行调度。参见图3,大数据或人工智能作业往往包含不同的任务,相关技术中的批量调度方式做出调度决策时需要遍历一个大数据或人工智能作业中所有任务的资源需求量,导致相关技术中业务作业的批量调度效率仍然较低。
相关技术中以Volcano为代表的批量调度方式将一个大数据和人工智能作业包含的所有任务都尽可能地调度到同一个节点上,由于这些任务需求的资源类型可能相同,这种批量调度方式容易导致节点资源竞争的问题,如图4所示,目前,相关技术中的批量调度方式仅在一定程度上提高了大数据和人工智能作业的调度效率,但仍存在调度效率低下、节点资源竞争的问题。针对上述问题,本发明实施例提供了一种容器调度方法以提高大数据和人工智能作业的调度效率,并缓解节点资源竞争的问题。
图5是本申请实施例提供的一种容器调度方法的流程图,本申请实施例可适用于大数据或人工智能作业调度的情况,该方法可以由容器调度装置来执行,该装置可以通过软件和/或硬件的方法实现,参见图5,本申请实施例提供的方法具体包括如下步骤:
步骤110、根据任务类型将业务作业对应的任务容器划分到至少一个自定义描述资源对象。
其中,任务类型可以是大数据或人工智能具体业务对应的业务类型,可以表示作业需要实现的功能或者需要传输的数据等,业务作业可以是大数据或人工智能待执行的事务,业务作业可以为一个或多个,任务容器可以是用于处理业务的环境,一个资源对象可以包括一个或多个任务容器,可以理解的是,自定义描述资源对象可以是用户根据需求自定义设置的资源对象,自定义描述资源中管理的任务容器可以对应相同的任务类型。
在本申请实施例中,针对各业务作业确定其对应的任务类型,可以按照任务类型将业务作业分别划分到不同的自定义描述资源对象中,自定义描述资源对象于业务类型之间可以配置有对应关系,业务作业可以按照业务类型与自定义描述资源对象之间的对应关系进行划分。
步骤120、确定各集群节点对自定义描述资源对象内任务容器的适应度和承载能力。
其中,集群节点可以是处理业务作业的处理节点,集群节点的数量可以为一个或多个,不同的集群节点可以位于相同位置或者不同位置。适应度可以是集群节点对不同自定义描述资源对象中业务作业的匹配程度,该匹配程度可以由资源以及处理性能等因素确定,承载能力可以是集群节点容纳业务作业的能力,承载能力可以由自定义描述资源对象的资源消耗量以及集群节点剩余资源量确定。
可以确定出各集群节点分别与不同自定义描述资源对象中任务容器的适应度,以及各集群节点容纳不同自定义描述资源对象中任务容器的承载能力。
步骤130、根据适应度和承载能力配置集群节点与任务容器的调度关系。
其中,调度关系可以是将任务容器与集群节点的配置关系,具有调度关系的集群节点可以处理对应任务容器。
在本申请实施例中,可以针对各任务容器按照其对应的适应度和承载能力确定与任务容器最匹配的集群节点,可以建立任务容器与集群节点的调度关系。
本申请实施例,通过按照业务作业的任务类型划分任务容器到不同的自定义描述资源对象中,确定集群节点针对各自定义描述资源对象的中任务容器的适应度和承载能力,按照适应度和承载能力配置任务容器与集群节点间的调度关系,在实现任务容器的批量调度的同时,可提高容器调度效率,依据适应度和承载能力将任务容器分配到不同的集群节点,可提高任务容器与集群节点匹配度,缓解集群节点资源竞争的问题。
图6是本申请实施例提供的一种容器调度方法的流程图,本申请实施例是在上述申请实施例基础上的具体化,参见图6,本申请实施例提供的方法具体包括如下步骤:
步骤210、创建分别对应业务作业所包括任务的任务容器,其中,任务容器包括镜像名称、容器启动命令、容器启动参数和任务类型标签。
在本申请实施例中,业务作业中可以包括多个任务,该任务可以是集群节点的待处理事务,针对各业务作业可以为其包含的任务创建对应的任务容器,其中,每个任务容器可以存在镜像名称、容器启动命令、容器启动参数以及任务类型标签。例如,在Kubernetes下使Kubernetes API-Server的容器创建模块为作业中的每一个任务创建与其对应的Kubernetes Pod。
步骤220、创建对应业务作业的自定义描述资源对象,其中,自定义描述资源对象包括名称、作业类型标签和调度优先级标签。
可以为业务作业创建自定义描述资源对象,并为自定义描述资源对象设置名称、作业类型标签和调度优先级标签等信息。
步骤230、按照任务容器的任务类型标签将各任务容器划分到具有匹配的作业类型标签的自定义描述资源对象。
在本申请实施例中,可以将任务容器按照其配置的任务类型标签划分到不同的自定义描述资源对象中,该自定义描述资源对象具体的任务类型标签可以与任务容器的任务类型标签相匹配,该匹配可以包括任务类型表示相同或者相关联。
步骤240、根据资源对象类型筛选自定义描述资源对象。
可以提取自定义描述资源对象的类型,若提取自定义描述资源对象的类型与设置的资源对象的类型相同,则对该自定义描述资源对象做进一步处理,若提取自定义描述资源对象的类型与设置的资源对象的类型不同,则不对该自定义描述资源对象进行处理。
步骤250、根据任务队列对各自定义描述资源对象进行排序。
其中,任务队列可以是暂存任务容器的存储空间,任务队列中的自定义描述资源对象可以采用先入先出的输出决策。
在本申请实施例中,可以将各自定义描述资源对象输入到任务队列,在任务队列中对各自定义描述资源对象进行排序,便于后续的任务容器调度,其中,任务队列的数量可以为一个或多个,例如,可以将相同资源对象类型的自定义描述资源对象存储到相同的任务队列,可以将不同的自定义描述资源对象分别存储到不同的任务队列。
步骤260、确定各集群节点对自定义描述资源对象内任务容器的适应度和承载能力。
步骤270、根据适应度和承载能力配置集群节点与任务容器的调度关系。
在上述申请实施例的基础上,根据任务队列对各所述自定义描述资源对象进行排序,包括以下至少之一:在所述任务队列中按照入队顺序对各所述自定义描述资源对象进行排序;在所述任务队列中按照调度优先级对各所述自定义描述资源对象进行排序。
其中,入队顺序可以是自定义描述资源对象到达任务队列的时间顺序,调度优先级可以是自定义描述资源对象被调度先后顺序。
在本申请实施例中,各自定义描述资源对象可以在任务队列中按照其到达任务队列的入队顺序进行排序,先到达任务队列的自定义描述资源对象可以先输出,还可以包括自定义描述资源对象在任务队列中按照其配置的调度优先级进行排序,自定义描述资源对象中调度优先级高的任务容器先输出。还可以综合入队顺序以及调度优先级对自定义描述资源对象进行排序,例如,任务队列中按照调度优先级从高到低依次排列各自定义描述资源对象,针对具有相同调度优先级的多个自定义描述资源对象,可以按照上述多个自定义描述资源对象的入队顺序依次在任务队列中进行排序。
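The combined ordering described above (priority descending, then first-in-first-out among equal priorities) can be sketched as follows; the `CrdJob` dataclass and its field names are illustrative assumptions, not the actual CRD schema:

```python
from dataclasses import dataclass, field
from itertools import count

_arrivals = count()  # monotonically increasing enqueue stamp

@dataclass
class CrdJob:
    name: str
    priority: int  # scheduling-priority label; higher is scheduled earlier
    arrival: int = field(default_factory=lambda: next(_arrivals))

def sort_queue(jobs):
    # Priority descending; jobs with equal priority keep FIFO enqueue order.
    return sorted(jobs, key=lambda job: (-job.priority, job.arrival))

queue = [CrdJob("etl", 1), CrdJob("train", 5), CrdJob("ingest", 5)]
ordered = [job.name for job in sort_queue(queue)]
```

Because Python's `sorted` is stable and the arrival stamp is part of the key, "train" and "ingest" (equal priority) keep their enqueue order while both precede the lower-priority "etl" job.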
图7是本申请实施例提供的另一种容器调度方法的流程图,本申请实施例是在上述申请实施例基础上的具体化,参见图7,本申请实施例体提供的方法具体包括如下步骤:
步骤310、根据任务类型将业务作业对应的任务容器划分到至少一个自定义描述资源对象。
步骤320、在各自定义描述资源对象中获取目标资源对象,并获取各集群节点的资源信息。
其中,目标资源对象可以是多个自定义描述资源对象中当前被处理的资源对象,目标资源对象可以随机获取或者从任务队列中选取。
在本申请实施例中,可以在各个自定义描述资源对象中当前待处理的资源对象记为目标资源对象,同时,可以提取各集群节点的资源信息,该资源信息可以反映不同集群节点当前性能状况,该资源信息可以包括集群节点的资源类型、资源总量、剩余资源量等信息。
步骤330、在各集群节点中剔除状态信息中包含污点标签的集群节点;在各集群节点中剔除资源剩余量小于资源需求量的集群节点。
其中,污点标签可以是集群节点上定义的键值型属性数据,在集群节点的污点标签存在时,该集群节点拒绝调度任务容器。资源需求量可以是表示自定义描述资源对象中任务容器需要的资源量。
可以通过状态信息对各集群节点进行筛选,该筛选过程可以包括提取各集群节点的状态信息,若提取到的状态信息存在污点标签,则该状态信息对应的集群节点拒绝调度任务容器,则可以将该集群节点剔除。还可以使用资源信息对集群节点进行筛选,具体的筛选过程可以包括提取各集群节点资源信息中的资源剩余量,若资源剩余量小于自定义描述资源对象中任务容器需要的总资源量,则确定集群节点无法容纳任务容器进行调度,则可以将该集群节点剔除。
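A minimal sketch of the two culling checks above — taint status first, then remaining resources against the example container's demand. The node dictionaries and field names are illustrative assumptions:

```python
def node_schedulable(node, demand):
    # Status check: nodes tainted NoExecute or NoSchedule refuse scheduling
    # and are filtered out.
    if any(t in ("NoExecute", "NoSchedule") for t in node.get("taints", ())):
        return False
    # Resource check: every resource the example container needs must still
    # fit within the node's remaining amount.
    return all(node["remaining"].get(r, 0) >= need for r, need in demand.items())

demand = {"cpu": 2, "mem": 4}
nodes = [
    {"name": "n1", "taints": ["NoSchedule"], "remaining": {"cpu": 8, "mem": 8}},
    {"name": "n2", "taints": [], "remaining": {"cpu": 1, "mem": 8}},
    {"name": "n3", "taints": [], "remaining": {"cpu": 4, "mem": 8}},
]
schedulable = [n["name"] for n in nodes if node_schedulable(n, demand)]
```

Only nodes passing both checks would enter the schedulable-node list used by the later fitness and capacity steps.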
步骤340、在目标资源对象中提取第一个任务容器记为示例容器。
其中,示例容器可以是目标资源对象中用于衡量容器资源状态的基准任务容器,示例容器可以是目标资源对象中第一个加入的任务容器,示例容器的选择不限于选择第一个任务容器,还可以在目标资源对象中随机选择任务容器或者以目标资源对象中所有任务容器的资源平均量构建虚拟的示例容器等。
在本申请实施例中,可以在记为目标资源对象中的自定义描述资源对象中选择第一个任务容器作为示例容器。
步骤350、提取示例容器的资源需求量。
可以提取示例容器的容器列表,按照容器列表确定各容器对象的不同资源的资源需求量,例如,可以提取示例容器的容量列表中不同容器对象对中央处理器(Central Processing Unit,CPU)、图形处理器(Graphics Processing Unit,GPU)、内存、磁盘和网络带宽等资源的需求量等,可以将各相同资源的总和作为资源需求量。
步骤360、根据资源需求量与资源信息中资源剩余量的匹配度确定适应度。
其中,资源剩余量可以是各集群节点中不同类型资源剩余未占用的值,例如,集群节点剩余CPU量、剩余GPU量、剩余内存、剩余磁盘以及剩余网络带宽等。
在本申请实施例中,可以基于任务容器的资源需求量以及集群节点的资源剩余量确定出匹配度,例如,可以将资源需求量与资源剩余量的比值作为匹配度,该比值在小于1的情况下越接近1则其越匹配,在此不对通过资源需求量和资源剩余量确定匹配度的方式进行限制。
步骤370、按照资源需求量与资源信息中资源剩余量确定集群节点对任务容器的容纳数量作为承载能力。
其中,容纳数量可以是集群节点中容纳示例容器的数量,该容纳数量可以由资源剩余量和资源需求量的商值确定。
在本申请实施例中,针对各集群节点可以提取其对应的资源剩余量,可以将该资源剩余量与示例容器的资源需求量的商值作为容纳数量,该容纳数量可以是集群节点对任务容器的承载能力。
步骤380、根据适应度和承载能力配置集群节点与任务容器的调度关系。
在上述申请实施例的基础上,根据所述资源需求量与所述资源信息中资源剩余量的匹配度确定所述适应度,包括:
按照LeastRequestedPriority策略和BalancedResourceAllocation策略确定所述资源需求量与所述资源剩余量的匹配度。
在本申请实施例中,可以按照LeastRequestedPriority策略和BalancedResourceAllocation策略共同确定出各集群节点与示例容器的匹配度,其中,LeastRequestedPriority策略可以包括由资源需求量与资源总量的比值确定,该匹配度=(资源剩余量-资源需求量)/资源总量,该匹配度的取值越高,则集群节点对目标资源对象中任务容器的适应度越高。而BalancedResourceAllocation策略可以包括各集群节点CPU和内存使用率方差越小的节点权重越高,可以在LeastRequestedPriority策略确定出匹配度后,按照各集群节点的CPU和内存的使用率再使用权重对匹配度进行调整。
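A rough numeric sketch of the two-part score, under stated assumptions: the first term follows the formula given above, (remaining − demand) / total averaged over resources, and the second approximates BalancedResourceAllocation by rewarding nodes whose CPU and memory utilisation stay close to each other after placement (Kubernetes' real plugin uses a variance-based formula):

```python
def least_requested_score(demand, remaining, total):
    # Matching degree per the description: (remaining - demand) / total for
    # each resource type, averaged; higher means a better fit.
    vals = [(remaining[r] - demand[r]) / total[r] for r in demand]
    return sum(vals) / len(vals)

def balanced_allocation_score(demand, remaining, total):
    # Rough stand-in for BalancedResourceAllocation: prefer nodes whose CPU
    # and memory utilisation after placement are close to each other.
    cpu_util = (total["cpu"] - remaining["cpu"] + demand["cpu"]) / total["cpu"]
    mem_util = (total["mem"] - remaining["mem"] + demand["mem"]) / total["mem"]
    return 1.0 - abs(cpu_util - mem_util)

def fitness(demand, remaining, total):
    return (least_requested_score(demand, remaining, total)
            + balanced_allocation_score(demand, remaining, total))

demand = {"cpu": 2, "mem": 4}
score = fitness(demand, {"cpu": 8, "mem": 16}, {"cpu": 16, "mem": 32})
```

For this node both resources score 0.375 on the first term and the post-placement utilisations are perfectly balanced, so the combined fitness is 1.375.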
在上述申请实施例的基础上,按照所述资源需求量与所述资源信息中资源剩余量确定集群节点对所述任务容器的容纳数量,包括:
提取示例容器中的各类资源的资源需求量以及资源信息中各类资源的所述资源剩余量;针对各类资源确定所述资源剩余量与资源需求量的商值,将各商值中的最小值作为容纳数量。
在本申请实施例中,可以提取示例容器中不同资源的资源需求量,例如,CPU、GPU、内存、磁盘和网络带宽的需求量。针对集群节点也分别提取上述类型资源的资源剩余量,对于相同资源,可以将资源剩余量与资源需求量的商作为对应资源的资源容纳数,可以将取值最小的资源容纳数作为集群节点与对于示例容器的容纳数量。
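The carrying-capacity rule above — the minimum, over all resource types, of the integer quotient of remaining amount over demand — can be sketched in a few lines; the resource names are illustrative:

```python
def carrying_capacity(demand, remaining):
    # For each resource type, how many example containers fit into the node's
    # remaining resources (integer part of the quotient); the node's carrying
    # capacity is the minimum over all resource types.
    return min(remaining[r] // demand[r] for r in demand)

cap = carrying_capacity({"cpu": 2, "mem": 4, "gpu": 1},
                        {"cpu": 16, "mem": 64, "gpu": 3})
```

Here CPU would allow 8 containers and memory 16, but GPU allows only 3, so the node's carrying capacity is 3 — the scarcest resource bounds the result.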
图8是本申请实施例提供的另一种容器调度方法的流程图,本申请实施例是在上述申请实施例基础上的具体化,参见图8,本申请实施例体提供的方法具体包括如下步骤:
步骤410、根据任务类型将业务作业对应的任务容器划分到至少一个自定义描述资源对象。
步骤420、确定各集群节点对自定义描述资源对象内任务容器的适应度和承载能力。
步骤430、针对相同任务容器按照适应度的取值从大到小对各集群节点进行排序。
在本申请实施例中,可以针对相同任务容器对应的适应度按照从大到小的顺序进行排序,可以使用适应度与集群节点的对应关系,按照该排序排列各集群节点,可以理解的是,每个任务容器可以存在各自对应的集群节点构成的排序序列。
步骤440、将集群节点排序中前百分之二十的集群节点作为候选节点。
可以在集群节点构成的排序中可以将前百分之二十的集群节点选择为候选节点,该候选节点可以用于进行任务容器调度。
步骤450、确定任务容器的容器总数与候选节点的节点总数的商值。
其中,节点总数可以是集群节点中被选择为候选节点的总数,容器总数可以是自定义资源描述对象中包括的任务容器的总数。
在本申请实施例中,可以计算容器总数与节点总数的商值。
步骤460、按照商值和承载能力将任务容器分配到各候选节点。
在本申请实施例中,可以按照以上述过程确定的商值以及承载能力为依据为任务容器分配候选节点,将自定义描述资源对象的任务容器分配到候选节点。例如,可以将商值对应数量的任务容器分配到每个候选节点,且使得每个候选节点中分配到的任务容器的数量不得大于承载能力。
步骤470、建立各任务容器与各自对应的候选节点的调度关系。
可以在任务容器中存储与其分配到的候选节点的标识信息,可以将使用该标识信息表明任务容器与候选节点的调度关系,例如,可以在任务容器设置一个节点名称字段,可以使用节点名称字段存储任务容器分配到的候选节点的节点名称。
在上述申请实施例的基础上,按照所述商值和所述承载能力将所述任务容器分配到各所述候选节点,包括:
按照各候选节点的适应度从高到低的顺序判断候选节点的承载能力是否大于或等于商值;若是,则为候选节点配置商值数量的任务容器;若否,则为候选节点配置承载能力数量的任务容器。
可以按照承载能力将候选节点进行排序,在排序中依次选择一个候选节点进行判断,确定该候选节点的承载能力是否大于节点总数与容器总数的商值,若大于或等于,则按照为当前的候选节点配置商值数量的任务容器,若小于,则为当前的候选节点配置承载能力数量的任务容器。
在上述申请实施例的基础上,确定存在所述任务容器未分配到所述候选节点,则按照所述集群节点排序中次前百分之二十的所述集群节点作为新的候选节点,将未分配的所述任务容器分配到所述新的候选节点。
在完成候选节点的分配后,可以对各任务容器进行判断,确定任务容器是否均已被分配到候选节点,例如,判断任务容器中节点名称字段是否为空,若是,则确定该任务容器未被分配,这种情况下,当前选择的候选节点无法完成调度所有任务容器的需求,可以在集群节点排序中以当前选择的候选节点为起点,重新在集群节点排序中选择起点后面的百分之二十的集群节点作为新的候选节点,可以将未分配的任务容器按照上述的分配方式分配给新的候选节点。
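The selection-and-fallback procedure above can be condensed into a small sketch. This is a simplification under stated assumptions: nodes are `(name, fitness, capacity)` tuples, the top 20% (at least one node) are tried first, each candidate receives at most `min(quotient, capacity)` containers, and the next 20% are tried while containers remain unassigned:

```python
def select_and_assign(pod_count, nodes):
    """Assign `pod_count` identical task containers to cluster nodes."""
    ranked = sorted(nodes, key=lambda n: n[1], reverse=True)
    step = max(1, len(ranked) // 5)          # 20% of the sorted node list
    placement, left, i = {}, pod_count, 0
    while left > 0 and i < len(ranked):
        batch = ranked[i:i + step]
        quota = max(1, left // len(batch))   # containers per candidate node
        for name, _fitness, capacity in batch:
            n = min(quota, capacity, left)
            if n > 0:
                placement[name] = placement.get(name, 0) + n
                left -= n
        i += step                            # fall back to the next 20%
    return placement, left

nodes = [("a", 0.9, 2), ("b", 0.8, 3), ("c", 0.5, 1), ("d", 0.4, 5), ("e", 0.1, 1)]
placement, unassigned = select_and_assign(4, nodes)
```

With five nodes each 20% batch holds one node: node "a" takes 2 containers (its capacity), then the fallback batch containing "b" absorbs the remaining 2, leaving nothing unassigned.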
在上述申请实施例的基础上,建立各任务容器与各自对应的候选节点的调度关系,包括:
将任务容器的节点名称字段设置为其对应候选节点的标识信息。
在本申请实施中,调度关系的绑定可以是将任务容器的节点名称字段设置为该任务容器所属的候选节点的标识信息,该标识信息可以包括候选节点的网络地址、节点名称或者节点编号等。
图9是本申请实施例提供的一种容器调度装置的结构示意图,可执行本申请任意实施例提供的容器调度方法,具备执行方法相应的功能模块和有益效果。该装置可以由软件和/或硬件实现,参见图9,本申请实施例提供的装置具体包括如下:
任务划分模块501,用于根据任务类型将业务作业对应的任务容器划分到至少一个自定义描述资源对象。
集群参数模块502,用于确定各所述集群节点对所述自定义描述资源对象内所述任务容器的适应度和承载能力。
调度设置模块503,用于根据所述适应度和所述承载能力配置所述集群节点与所述任务容器的调度关系。
本申请实施例,通过任务划分模块按照业务作业的任务类型划分任务容器到不同的自定义描述资源对象中,集群参数模块确定集群节点针对各自定义描述资源对象的中任务容器的适应度和承载能力,调度设置模块按照适应度和承载能力配置任务容器与集群节点间的调度关系,在实现任务容器的批量调度的同时,可提高容器调度效率,依据适应度和承载能力将任务容器分配到不同的集群节点,可提高任务容器与集群节点匹配度,缓解集群节点资源竞争的问题。
在上述申请实施例的基础上,装置还包括:
队列处理模块,用于根据任务队列对各所述自定义描述资源对象进行排序。
在上述申请实施例的基础上,装置还包括:
资源对象筛选模块,用于根据资源对象类型筛选所述自定义描述资源对象。
在上述申请实施例的基础上,任务划分模块501包括:
容器创建单元,用于创建分别对应所述业务作业包括任务的所述任务容器,其中,所述任务容器包括镜像名称、容器启动命令、容器启动参数和任务类型标签。
资源对象单元,用于创建对应所述业务作业的自定义描述资源对象,其中,所述自定义描述资源对象包括名称、作业类型标签和调度优先级标签。
划分执行单元,用于按照所述任务容器的任务类型标签将各所述任务容器划分到具有匹配的作业类型标签的所述自定义描述资源对象。
在上述申请实施例的基础上,集群参数模块502包括:
在各所述自定义描述资源对象中获取目标资源对象,并获取各所述集群节点的资源信息;
示例确定单元,用于在所述目标资源对象中提取第一个所述任务容器记为示例容器。
资源提取单元,用于提取所述示例容器的资源需求量。
适应度单元,用于根据所述资源需求量与所述资源信息中资源剩余量的匹配度确定所述适应度。
承载能力单元,用于按照所述资源需求量与所述资源信息中资源剩余量确定集群节点对所述任务容器的容纳数量作为所述承载能力。
在上述申请实施例的基础上,集群参数模块502还包括:
集群筛选单元,用于在各所述集群节点中剔除状态信息中包含污点标签的集群节点;在各所述集群节点中剔除资源剩余量小于所述资源需求量的集群节点。
在上述申请实施例的基础上,适应度单元,具体用于:按照LeastRequestedPriority策略和BalancedResourceAllocation策略确定所述资源需求量与所述资源剩余量的匹配度。
在上述申请实施例的基础上,承载能力单元具体用于:提取所述示例容器中的各类资源的所述资源需求量以及所述资源信息中各类资源的所述资源剩余量;针对所述各类资源确定所述资源剩余量与所述资源需求量的商值,将各所述商值中的最小值作为所述容纳数量。
在上述申请实施例的基础上,调度设置模块503包括:
排序单元,用于针对相同所述任务容器按照所述适应度的取值从大到小对各所述集群节点进行排序。
候选选择单元,用于将所述集群节点排序中前百分之二十的所述集群节点作为候选节点。
商值确定单元,用于确定所述任务容器的容器总数与所述候选节点的节点总数的商值。
任务分配单元,用于按照所述商值和所述承载能力将所述任务容器分配到各所述候选节点。
关系建立单元,用于建立各所述任务容器与各自对应的所述候选节点的调度关系。
在上述申请实施例的基础上,任务分配单元具体用于:按照各所述候选节点的所述适应度从高到低的顺序判断所述候选节点的所述承载能力是否大于或等于所述商值;若是,则为所述候选节点配置所述商值数量的所述任务容器;若否,则为所述候选节点配置所述承载能力数量的所述任务容器。
在上述申请实施例的基础上,调度设置模块503还包括:
异常处理单元,用于确定存在所述任务容器未分配到所述候选节点,则按照所述集群节点排序中次前百分之二十的所述集群节点作为新的候选节点,将未分配的所述任务容器分配到所述新的候选节点。
在上述申请实施例的基础上,关系建立单元具体用于:将所述任务容器的节点名称字段设置为其对应所述候选节点的标识信息。
在上述申请实施例的基础上,队列处理模块中具体用于:在所述任务队列中按照入队顺序对各所述自定义描述资源对象进行排序;在所述任务队列中按照调度优先级对各所述自定义描述资源对象进行排序。
在一个示例性的实施方式中,基于Kubernetes架构的容器调度为例,参见图10,Kubernetes集群由一个Kubernetes Master节点和若干个Kubernetes Node节点组成。在本申请实施例中,Kubernetes Master节点负责对待调度作业进行调度。Kubernetes Node节点负责根据Kubernetes Master节点绑定的结果部署并运行待调度任务。参见图11,实现本申请实施例提供的容器调度装置由六个组件构成,包括:Kubernetes API-Server的容器创建模块、Kubernetes API-Server的事件监听模块、Kubernetes Etcd、基于Kubernetes的批量调度器、基于Kubernetes的批量调度器中的容器调度队列和Kubernetes API-Server的调度绑定模块。其中,Kubernetes API-Server的容器创建模块用于为大数据或人工智能作业中的每一个任务创建与其对应的容器并根据任务类型标签将任务划分至不同的待调度作业中;Kubernetes API-Server的事件监听模块用于监听Kubernetes API-Server的容器创建模块的创建事件并对待调度作业的类型进行校验;Kubernetes Etcd用于存储本申请提出的批量调度装置中各个组件运行的状态信息和集群信息;其中,批量调度装置中各个组件运行的状态信息包括组件运行错误的时间点和错误事件日志;集群信息包括集群各 节点和容器的状态信息和资源信息;基于Kubernetes的批量调度器中的容器调度队列用于根据待调度作业的调度优先级以及加入队列的时间点进行调度顺序排序;基于Kubernetes的批量调度器用于对容器调度队列弹出的待调度作业进行调度;基于Kubernetes的批量调度器包括过滤模块、适应度计算模块、承载能力计算模块和调度节点选择模块;过滤模块用于过滤掉不满足状态校验和资源校验的节点;适应度计算模块用于对通过过滤模块的节点进行适应度计算;承载能力计算模块用于对通过过滤模块的节点进行承载能力计算;调度节点选择模块用于根据节点的适应度和承载能力值为待调度作业中的每一个任务选择最合适的调度节点;Kubernetes API-Server的调度绑定模块用于根据基于Kubernetes的批量调度器发送的调度结果,对待调度作业中的每一个任务与其调度节点进行绑定。
图12是本申请实施例提供的一种容器调度方法的流程示例图,参见图12,本申请实施例提供的方法包括如下七个步骤:作业划分、作业类型校验、待调度作业排序、节点过滤及适应度计算、节点承载能力计算、调度节点选择和节点绑定。
首先,Kubernetes API-Server的容器创建模块根据用户提交的作业配置请求创建相应的容器并根据任务类型标签将作业中的任务划分至不同的待调度作业中。Kubernetes API-Server的事件监听模块对待调度作业的类型进行校验。其次,基于Kubernetes的批量调度器中的容器调度队列对通过作业类型检验的待调度作业进行调度优先级排序并从队头弹出待调度作业进行调度。再次,基于Kubernetes的批量调度器的过滤模块和适应度计算模块分别进行节点过滤及适应度计算。接着,基于Kubernetes的批量调度器的承载能力计算模块计算节点的承载能力。随后,基于Kubernetes的批量调度器的调度节点选择模块根据节点的适应度和承载能力为待调度作业中的每一个任务选择最合适的调度节点。最后,Kubernetes API-Server的调度绑定模块将待调度作业的每一个任务和其调度节点进行绑定。
基于Kubernetes架构的容器批量调度方法的具体处理步骤如下:
第一步:作业划分
Kubernetes API-Server的容器创建模块为大数据或人工智能作业中的每一个任务创建与其对应的容器并根据任务类型标签将任务划分至不同的待调度作业中,参见图13所示。最终划分得到的每个待调度作业中的任务类型一致。
本步骤又可以包括下列步骤:
(1)用户通过Kubernetes中的kubectl命令行工具向Kubernetes API-Server的容器创建模块发送大数据或人工智能作业的创建请求。Kubernetes API-Server的容器创建模块为作业中的每一个任务创建与其对应的Kubernetes Pod(以下简称Pod)。创建的Pod对象包括Pod中容器的镜像名称、容器启动命令、容器启动参数和Pod对应的任务类型标签。
(2)若用户提交的作业类型为大数据或人工智能作业,则Kubernetes API-Server的容器创建模块为该作业创建描述待调度作业的Kubernetes CRD(以下简称CRD)资源对象。创建的CRD资源对象中包括CRD资源对象的名称、CRD资源对象对应的作业类型标签和CRD资源对象的调度优先级标签。Kubernetes API-Server的容器创建模块将具有相同任务类型标签的Pod划分至同一个待调度作业对应的CRD资源对象中。具体而言,Kubernetes API-Server的容器创建模块遍历大数据或人工智能作业的所有Pod对象,将任务类型标签一致的Pod对象划分至同一个待调度作业对应的CRD资源对象中。其中,CRD资源对象的作业类型标签与它包含的所有Pod对象的任务类型标签一致。
(3)Kubernetes API-Server的容器创建模块将待调度作业对应的CRD资源对象实例化并存储在Kubernetes API-Server中。
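The job-division step above — traversing the Pod objects and grouping those with the same task-type label into one CRD resource object — can be sketched as follows; the dict shapes and label key are illustrative stand-ins for the real Kubernetes objects:

```python
def divide_job(pods):
    # Group Pod objects by their task-type label; each group becomes one CRD
    # resource object ("job to be scheduled") whose job-type label matches
    # the task-type label of every Pod it contains.
    crds = {}
    for pod in pods:
        task_type = pod["labels"]["task-type"]
        crd = crds.setdefault(task_type, {"job_type": task_type, "pods": []})
        crd["pods"].append(pod["name"])
    return crds

pods = [
    {"name": "map-0", "labels": {"task-type": "map"}},
    {"name": "map-1", "labels": {"task-type": "map"}},
    {"name": "reduce-0", "labels": {"task-type": "reduce"}},
]
crds = divide_job(pods)
```

Each resulting group contains only Pods of one task type, so a big-data job with map and reduce tasks yields two jobs to be scheduled.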
第二步:作业类型校验
Kubernetes API-Server的事件监听模块对待调度作业的类型进行校验,参见图14所示。只有通过类型校验的待调度作业才能送入容器调度队列。
本步骤又可以包括下列步骤:
(1)Kubernetes API-Server的事件监听模块监听到Kubernetes API-Server的容器创建模块中的对象创建事件。
(2)Kubernetes API-Server的事件监听模块会对该创建事件中待调度作业对应的对象的类型进行校验。如果该待调度作业创建的资源类型不为描述批量作业的CRD资源类型,那么视为校验不通过。Kubernetes API-Server的事件监听模块将校验结束的时间点和校验错误写入Kubernetes的事件日志并存入Kubernetes Etcd中;否则,视为校验通过,将该待调度作业对应的CRD资源对象送入基于Kubernetes的批量调度器中的容器调度队列中并执行第三步。
第三步:待调度作业排序
根据待调度作业对应的CRD资源对象中定义的作业的调度优先级和它们到达基于Kubernetes的批量调度器中的容器调度队列的时间顺序,基于Kubernetes的批量调度器中的容器调度队列对它们进行调度顺序的排序,参见图15所示。
本步骤又可以包括下列步骤:
(1)基于Kubernetes的批量调度器中的容器调度队列监听待调度作业对应的CRD资源对象的加入事件。新的待调度作业对应的CRD资源对象默认会加入到队尾。
(2)当基于Kubernetes的批量调度器中的容器调度队列收到有新的待调度作业对应的CRD资源对象加入时,会根据队列中所有待调度作业对应的CRD资源对象中的调度优先级标签对它们进行降序排序,即优先级最高的待调度作业对应的CRD资源对象被放置在队头,优先级最低的待调度作业对应的CRD资源对象被放置在队尾。
(3)若基于Kubernetes的批量调度器中的容器调度队列中包含多个优先级相同的待调度作业对应的CRD资源对象,则按照这些CRD资源对象加入基于Kubernetes的批量调度器中的容器调度队列的时间顺序进行排序(即First-In-First-Out策略)。
(4)在所有待调度作业对应的CRD资源对象完成排序后,基于Kubernetes的批量调度器中的容器调度队列从队头弹出待调度作业对应的CRD资源对象执行第四步。
第四步:节点过滤及适应度计算
基于Kubernetes的批量调度器的过滤模块根据集群中的节点信息和待调度作业对应的CRD资源对象的资源信息对集群节点进行过滤初选,随后,基于Kubernetes的批量调度器的适应度计算模块对通过过滤初选的节点进行节点适应度计算,参见图16所示。
本步骤又可以包括下列步骤:
(1)基于Kubernetes的批量调度器监听基于Kubernetes的批量调度器中的容器调度队列的待调度作业对应的CRD资源的弹出事件。
(2)当监听到待调度作业对应的CRD资源对象从基于Kubernetes的批量调度器中的容器调度队列的队头弹出时,基于Kubernetes的批量调度器向Kubernetes Etcd发起请求获取当前集群中各个节点的状态信息和资源信息。节点的状态信息主要表示该节点是否可以被调度;节点的资源信息指的是节点上各类资源的总量和剩余量大小。节点的各类资源包括CPU、GPU、内存、磁盘和网络带宽资源。
(3)基于Kubernetes的批量调度器的过滤模块对获取到的各个节点的状态进行状态校验。若当前节点的状态中包含NoExecute或者NoSchedule污点,表示该节点被设置为不可调度,该节点状态校验不通过;否则,该节点通过状态校验。
(4)基于Kubernetes的批量调度器的过滤模块对通过状态校验的节点进行资源校验。基于Kubernetes的批量调度器的过滤模块从待调度作业中的CRD资源对象取出第一个Pod对象作为示例Pod对象。随后,基于Kubernetes的批量调度器的过滤模块获取该示例Pod对象的各类资源需求量。其中,示例Pod对象需求的各类资源包括CPU、GPU、内存、磁盘和网络带宽资源。对于任意一种类型的资源,若示例Pod对象的资源需求量大于该节点的资源剩余量,则该节点资源校验不通过;否则该节点通过资源校验。
(5)基于Kubernetes的批量调度器的过滤模块将通过状态校验和资源校验的节点存储在可调度节点列表中。
(6)在集群内所有节点完成过滤步骤后,基于Kubernetes的批量调度器的适应度计算模块对可调度节点列表中的节点依次遍历。
(7)对于可调度节点列表中的每一个节点,基于Kubernetes的批量调度器的适应度计算模块会根据待调度作业的CRD资源对象中的示例Pod的各类资源需求量和该节点的各类资源剩余量进行节点适应度计算。其中,节点的适应度数值具体为节点的得分。节点的得分越高表示示例Pod对象部署在该节点上的适合程度越高。节点的得分通过Kubernetes原生的LeastRequestedPriority和BalancedResourceAllocation优选策略计算得到。
(8)基于Kubernetes的批量调度器的适应度计算模块存储可调度节点列表中每个节点的适应度数值并执行第五步。
第五步:节点承载能力计算
基于Kubernetes的批量调度器的承载能力计算模块对可调度节点列表中的每个节点计算其能容纳待调度作业对应的CRD资源对象中待调度Pod对象的数量作为该节点的承载能力,参见图17所示。
本步骤又可以包括下列步骤:
(1)基于Kubernetes的批量调度器的承载能力计算模块从待调度作业对应的CRD资源对象中取出第一个任务对应的Pod对象作为示例Pod对象。基于Kubernetes的批量调度器的承载能力计算模块获取示例Pod对象的各类资源需求量。示例Pod对象的需求的各类资源包括CPU、GPU、内存、磁盘和网络带宽资源。具体是基于Kubernetes的批量调度器的承载能力计算模块获取示例Pod对象中的容器列表并 依次遍历该容器列表中的容器对象,获取每个容器对象的各类资源需求量,最后将每个容器对象的各类资源需求量进行累加汇总得到该示例Pod对象的各类资源需求量。
(2)基于Kubernetes的批量调度器的承载能力计算模块获取每个节点上各种类型资源的剩余量大小。节点的各类资源包括CPU、GPU、内存、磁盘和网络带宽资源。
(3)对于每一种类型的资源,基于Kubernetes的批量调度器的承载能力计算模块对节点的资源剩余量和示例Pod对象的资源需求量作商,取整数部分。所有类型资源中商最小的数值就是该节点能够容纳待调度作业对应的CRD资源对象中示例Pod对象的最大数量。
(4)基于Kubernetes的批量调度器的承载能力计算模块将所有类型资源中商最小的数值作为该节点的承载能力值进行存储并执行第六步。
第六步:调度节点选择
基于Kubernetes的批量调度器的调度节点选择模块根据可调度节点列表中节点的适应度和节点的承载能力为待调度CRD资源对象中的每一个任务对应的Pod对象选择最合适的调度节点,参见图18所示。
本步骤又可以包括下列步骤:
(1)基于Kubernetes的批量调度器的调度节点选择模块获取可调度节点列表中每个节点的适应度和承载能力数值。节点的适应度通过第三步节点过滤及适应度计算得到,节点的承载能力通过第四步节点承载能力计算得到。
(2)基于Kubernetes的批量调度器的调度节点选择模块根据每个节点的适应度和承载能力数值为待调度作业对应的CRD资源对象中的每个任务对应的Pod对象选择最合适的调度节点。其中,适应度数值越高的节点表示任务调度在该节点的适应程度高,因此在调度节点选择时会被优先考虑。承载能力数值表示该节点部署待调度作业中任务的数量上限,决定了该节点可调度的任务数量。具体而言,基于Kubernetes的批量调度器的调度节点选择模块首先根据节点的适应度数值对可调度节点列表中的节点进行降序排序。
(3)基于Kubernetes的批量调度器的调度节点选择模块选取可调度节点列表中适应度数值前20%的可调度节点并将它们存储为候选调度节点列表(候选调度节点列表至少包含1个候选节点)。
(4)基于Kubernetes的批量调度器的调度节点选择模块将待调度作业对应的CRD资源对象中包含的任务对应的Pod对象的数量与候选调度节点列表中的节点数量作商,以确定候选调度节点列表中的每一个候选调度节点需要部署的待调度作业对应的CRD资源对象中任务对应的Pod对象的数量。
(5)若该商值小于1,说明候选调度节点列表中的节点数量超过待调度作业对应的CRD资源对象任务对应的Pod的数量。因此,从适应度最高的候选调度节点开始,每一个节点部署待调度作业对应的CRD资源对象中一个任务对应的Pod对象,直到待调度作业对应的CRD资源对象中所有的任务对应的Pod都被选定调度节点为止。基于Kubernetes的批量调度器的调度节点选择模块定义名为调度数量的变量,该变量表示候选节点列表中的候选节点分别部署的待调度作业对应的CRD资源对象中任务对应的Pod对象的数量。之后,基于Kubernetes的批量调度器的调度节点选择模块把参与部署待调度任务对应的CRD资源对象的所有节点的调度数量的数值设置为1;否则,候选节点列表中的节点按照此商值部署待调度CRD资源对象。具体而言,对于候选节点列表中的任意节点,若商值大于该节点的承载能力值时,则该节点的调度数量值被设置为该节点的承载能力值;否则,该节点的调度数量值被设置为商值。
(6)若待调度作业对应的CRD资源对象中存在任务对应的Pod对象未找到最合适的调度节点,则待调度作业全体被视为调度失败。基于Kubernetes的批量调度器的调度节点选择模块将调度错误发生的位置和错误的原因写入Kubernetes事件日志并存入Kubernetes Etcd中。在所有候选节点列表中的节点都完成调度数量值的设置后,基于Kubernetes的批量调度器的调度节点选择模块对候选调度节点列表中所有节点的调度数量值进行存储。
(7)基于Kubernetes的批量调度器的调度节点选择模块对候选节点列表中所有节点的调度数量值进行累加汇总。若候选节点列表中所有节点的调度数量总和小于待调度作业对应的CRD资源对象中任务对应的Pod的数量,则基于Kubernetes的批量调度器的调度节点选择模块将候选调度节点列表清空并选取节点适应度数值次高的20%节点,将它们存储为候选调度节点列表并重复步骤4-6直至待调度作业对应的CRD资源对象中的所有任务对应的Pod对象都被指定调度节点为止;若调度数量总和等于待调度CRD资源对象中Pod的数量,则表示待调度作业对应的CRD资源对象中的所有任务对应的Pod对象都找到最合适的调度节点。基于Kubernetes的批量调度器的调度节点选择模块将待调度作业CRD资源对象中的每个任务对应的Pod对象和其对应的调度节点作为调度结果发送给Kubernetes API-Server的调度绑定模块并执行第七步。
第七步:节点绑定
Kubernetes API-Server的调度绑定模块将待调度作业对应的CRD资源对象中的每一个任务对应的 Pod对象与其对应的调度节点进行绑定,参见图19所示。
本步骤又可以包括下列步骤:
(1)Kubernetes API-Server的调度绑定模块监听基于Kubernetes的批量调度器的调度结果发送事件。
(2)若监听到来自基于Kubernetes的批量调度器发送的调度结果,Kubernetes API-Server的调度绑定模块从调度结果中解析出待调度作业对应的CRD资源对象中的每一个任务对应的Pod对象和其调度节点。若Kubernetes API-Server的调度绑定模块解析对象错误,则Kubernetes API-Server的调度绑定模块将解析错误的位置和错误的原因写入Kubernetes事件日志并存入Kubernetes Etcd中。
(3)Kubernetes API-Server的调度绑定模块对待调度作业对应的CRD资源对象中的每一个任务对应的Pod对象和其调度节点进行绑定操作。具体而言,Kubernetes API-Server的调度绑定模块遍历待调度作业对应的CRD资源对象中任务对应的Pod对象并将待调度任务对应的Pod对象中的NodeName字段设置为其调度节点的名称,并异步更新Kubernetes API-Server中该Pod的NodeName字段设置为该节点的名称。若存在待调度作业对应的CRD资源对象中部分任务对应的Pod对象与其调度节点无法绑定的情况,则Kubernetes API-Server的调度绑定模块将绑定错误的位置和错误的原因写入Kubernetes事件日志并存入Kubernetes Etcd中。
(4)当待调度作业对应的CRD资源对象中所有任务对应的Pod对象都完成了与其调度节点的绑定操作,则该待调度作业对应的CRD资源对象完成调度流程。
图20是本申请实施例提供的一种电子设备的结构示意图,该电子设备包括处理器60、存储器61、输入装置62和输出装置63;电子设备中处理器60的数量可以是一个或多个,图20中以一个处理器60为例;电子设备中处理器60、存储器61、输入装置62和输出装置63可以通过总线或其他方式连接,图20中以通过总线连接为例。
存储器61作为一种计算机可读存储介质,可用于存储软件程序、计算机可执行程序以及模块,如本申请实施例中的容器调度装置对应的模块(任务划分模块501、集群参数模块502和调度设置模块503)。处理器60通过运行存储在存储器61中的软件程序、指令以及模块,从而执行电子设备的各种功能应用以及数据处理,即实现上述的容器调度方法。
存储器61可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序;存储数据区可存储根据电子设备的使用所创建的数据等。此外,存储器61可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。在一些实例中,存储器61可进一步包括相对于处理器60远程设置的存储器,这些远程存储器可以通过网络连接至电子设备。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
输入装置62可用于接收输入的数字或字符信息,以及产生与电子设备的用户设置以及功能控制有关的键信号输入。输出装置63可包括显示屏等显示设备。
本申请实施例还提供一种包含计算机可执行指令的存储介质,所述计算机可执行指令在由计算机处理器执行时用于执行一种容器调度方法,该方法包括:
根据任务类型将业务作业对应的任务容器划分到至少一个自定义描述资源对象;
确定各所述集群节点对所述自定义描述资源对象内所述任务容器的适应度和承载能力;
根据所述适应度和所述承载能力配置所述集群节点与所述任务容器的调度关系。
通过以上关于实施方式的描述,所属领域的技术人员可以清楚地了解到,本申请可借助软件及必需的通用硬件来实现,当然也可以通过硬件实现。基于这样的理解,本申请的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可以存储在计算机可读存储介质中,如计算机的软盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、闪存(FLASH)、硬盘或光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。
值得注意的是,上述装置的实施例中,所包括的各个单元和模块只是按照功能逻辑进行划分的,但并不局限于上述的划分,只要能够实现相应的功能即可;另外,各功能单元的具体名称也只是为了便于相互区分,并不用于限制本申请的保护范围。
本领域普通技术人员可以理解,上文中所公开方法中的全部或某些步骤、系统、设备中的功能模块/单元可以被实施为软件、固件、硬件及其适当的组合。
在硬件实施方式中,在以上描述中提及的功能模块/单元之间的划分不一定对应于物理组件的划分;例如,一个物理组件可以具有多个功能,或者一个功能或步骤可以由若干物理组件合作执行。某些物理组件或所有物理组件可以被实施为由处理器,如中央处理器、数字信号处理器或微处理器执行的软件,或者被实施为硬件,或者被实施为集成电路,如专用集成电路。这样的软件可以分布在计算机可读介质 上,计算机可读介质可以包括计算机存储介质(或非暂时性介质)和通信介质(或暂时性介质)。如本领域普通技术人员公知的,术语计算机存储介质包括在用于存储信息(诸如计算机可读指令、数据结构、程序模块或其他数据)的任何方法或技术中实施的易失性和非易失性、可移除和不可移除介质。计算机存储介质包括但不限于RAM、ROM、EEPROM、闪存或其他存储器技术、CD-ROM、数字多功能盘(DVD)或其他光盘存储、磁盒、磁带、磁盘存储或其他磁存储装置、或者可以用于存储期望的信息并且可以被计算机访问的任何其他的介质。此外,本领域普通技术人员公知的是,通信介质通常包含计算机可读指令、数据结构、程序模块或者诸如载波或其他传输机制之类的调制数据信号中的其他数据,并且可包括任何信息递送介质。

Claims (15)

  1. 一种容器调度方法,包括:
    根据任务类型将业务作业对应的任务容器划分到至少一个自定义描述资源对象;
    确定集群节点对所述自定义描述资源对象内所述任务容器的适应度和承载能力;
    根据所述适应度和所述承载能力配置所述集群节点与所述任务容器的调度关系。
  2. 根据权利要求1所述方法,还包括:
    根据任务队列对所述至少一个自定义描述资源对象进行排序。
  3. 根据权利要求1所述方法,还包括:
    根据资源对象类型筛选所述至少一个自定义描述资源对象。
  4. 根据权利要求1所述方法,其中,所述根据任务类型将业务作业对应的任务容器划分到至少一个自定义描述资源对象,包括:
    创建对应所述业务作业的所述任务容器,其中,所述任务容器对应所述业务作业中包括的任务,所述任务容器包括镜像名称、容器启动命令、容器启动参数和任务类型标签;
    创建对应所述业务作业的自定义描述资源对象,其中,所述自定义描述资源对象包括名称、作业类型标签和调度优先级标签;
    按照所述任务容器的任务类型标签将所述任务容器划分到具有匹配的作业类型标签的所述自定义描述资源对象。
  5. 根据权利要求1或2所述方法,其中,所述确定集群节点对所述自定义描述资源对象内所述任务容器的适应度和承载能力,包括:
    在所述至少一个自定义描述资源对象中获取目标资源对象,并获取集群节点的资源信息;
    在所述目标资源对象中提取第一个所述任务容器记为示例容器;
    提取所述示例容器的资源需求量;
    根据所述资源需求量与所述资源信息中资源剩余量的匹配度确定所述适应度;
    按照所述资源需求量与所述资源信息中资源剩余量确定集群节点对所述任务容器的容纳数量,以将所述容纳数量作为所述承载能力。
  6. 根据权利要求5所述方法,还包括:
    在至少一个集群节点中剔除状态信息中包含污点标签的集群节点;
    在所述至少一个集群节点中剔除资源剩余量小于所述资源需求量的集群节点。
  7. 根据权利要求5所述方法,其中,所述根据所述资源需求量与所述资源信息中资源剩余量的匹配度确定所述适应度,包括:
    按照LeastRequestedPriority策略和BalancedResourceAllocation策略确定所述资源需求量与所述资源剩余量的匹配度,以将所述匹配度作为所述适应度。
  8. 根据权利要求5所述方法,其中,所述按照所述资源需求量与所述资源信息中资源剩余量确定集群节点对所述任务容器的容纳数量,包括:
    提取所述示例容器中的多类资源的所述资源需求量以及所述资源信息中所述多类资源的所述资源剩余量;
    针对所述多类资源确定所述资源剩余量与所述资源需求量的商值,将多个商值中的最小值作为所述容纳数量。
  9. 根据权利要求1所述方法,其中,所述根据所述适应度和所述承载能力配置所述集群节点与所述任务容器的调度关系,包括:
    针对相同所述任务容器按照所述适应度的取值从大到小对多个集群节点进行排序;
    将所述多个集群节点排序中前百分之二十的集群节点作为候选节点;
    确定所述任务容器的容器总数与所述候选节点的节点总数的商值;
    按照所述商值和所述承载能力将所述任务容器分配到多个候选节点;
    建立所述任务容器与各自对应的所述候选节点的调度关系。
  10. 根据权利要求9所述方法,其中,所述按照所述商值和所述承载能力将所述任务容器分配到多个候选节点,包括:
    按照所述多个候选节点的所述适应度从高到低的顺序判断每个候选节点的所述承载能力是否大于或等于所述商值;
    基于所述每个候选节点的所述承载能力大于或等于所述商值的判断结果,为所述每个候选节点配置所述商值数量的所述任务容器;
    基于所述每个候选节点的所述承载能力小于所述商值的判断结果,为所述每个候选节点配置所述承载能力数量的所述任务容器。
  11. 根据权利要求9所述方法,还包括:
    响应于确定存在所述任务容器未分配到所述候选节点,按照所述多个集群节点排序中次前百分之二十的集群节点作为新的候选节点,将未分配的所述任务容器分配到所述新的候选节点。
  12. 根据权利要求9所述方法,其中,所述建立所述任务容器与各自对应的所述候选节点的调度关系,包括:
    将所述任务容器的节点名称字段设置为所述任务容器对应的所述候选节点的标识信息。
  13. 根据权利要求2所述方法,其中,所述根据任务队列对所述至少一个自定义描述资源对象进行排序,包括以下至少之一:
    在所述任务队列中按照入队顺序对所述至少一个自定义描述资源对象进行排序;
    在所述任务队列中按照调度优先级对所述至少一个自定义描述资源对象进行排序。
  14. 一种电子设备,包括:
    一个或多个处理器;
    存储器,用于存储一个或多个程序;
    当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如权利要求1-13中任一所述方法。
  15. 一种计算机可读存储介质,所述计算机可读存储介质存储有一个或多个程序,所述一个或多个程序被所述一个或多个处理器执行,以实现如权利要求1-13中任一所述方法。
PCT/CN2023/087625 2022-04-15 2023-04-11 一种容器调度方法、电子设备和存储介质 WO2023198061A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210399975.0 2022-04-15
CN202210399975.0A CN114840304B (zh) 2022-04-15 2022-04-15 一种容器调度方法、电子设备和存储介质

Publications (1)

Publication Number Publication Date
WO2023198061A1 (zh)

Family

ID=82566535

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/087625 WO2023198061A1 (zh) 2022-04-15 2023-04-11 一种容器调度方法、电子设备和存储介质

Country Status (2)

Country Link
CN (1) CN114840304B (zh)
WO (1) WO2023198061A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114840304B (zh) * 2022-04-15 2023-03-21 中兴通讯股份有限公司 一种容器调度方法、电子设备和存储介质
CN116501947B (zh) * 2023-06-21 2023-10-27 中国传媒大学 语义搜索云平台的构建方法、系统及设备和存储介质
CN117170811B (zh) * 2023-09-07 2024-09-13 中国人民解放军国防科技大学 一种基于volcano的节点分组作业调度方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110072437A1 (en) * 2009-09-23 2011-03-24 International Business Machines Corporation Computer job scheduler with efficient node selection
CN106325998A (zh) * 2015-06-30 2017-01-11 华为技术有限公司 一种基于云计算的应用部署的方法和装置
CN111522639A (zh) * 2020-04-16 2020-08-11 南京邮电大学 Kubernetes集群架构系统下多维资源调度方法
CN113342477A (zh) * 2021-07-08 2021-09-03 河南星环众志信息科技有限公司 一种容器组部署方法、装置、设备及存储介质
CN114840304A (zh) * 2022-04-15 2022-08-02 中兴通讯股份有限公司 一种容器调度方法、电子设备和存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10783046B2 (en) * 2016-11-22 2020-09-22 Nutanix, Inc. Executing resource management operations in distributed computing systems
US11243818B2 (en) * 2017-05-04 2022-02-08 Salesforce.Com, Inc. Systems, methods, and apparatuses for implementing a scheduler and workload manager that identifies and optimizes horizontally scalable workloads
CN108519911A (zh) * 2018-03-23 2018-09-11 上饶市中科院云计算中心大数据研究院 一种基于容器的集群管理系统中资源的调度方法和装置
CN111538586A (zh) * 2020-01-23 2020-08-14 中国银联股份有限公司 集群gpu资源管理调度系统、方法以及计算机可读存储介质
CN111858069B (zh) * 2020-08-03 2023-06-30 网易(杭州)网络有限公司 集群资源调度的方法、装置及电子设备
CN113204428B (zh) * 2021-05-28 2023-01-20 北京市商汤科技开发有限公司 资源调度方法、装置、电子设备以及计算机可读存储介质
CN114035941A (zh) * 2021-10-18 2022-02-11 阿里巴巴(中国)有限公司 资源调度系统、方法以及计算设备

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117240773A (zh) * 2023-11-15 2023-12-15 华北电力大学 一种电力通信网节点编排方法、装置、设备及介质
CN117240773B (zh) * 2023-11-15 2024-02-02 华北电力大学 一种电力通信网节点编排方法、装置、设备及介质
CN117435324A (zh) * 2023-11-28 2024-01-23 江苏天好富兴数据技术有限公司 基于容器化的任务调度方法
CN117435324B (zh) * 2023-11-28 2024-05-28 江苏天好富兴数据技术有限公司 基于容器化的任务调度方法
CN118093209A (zh) * 2024-04-26 2024-05-28 银河麒麟软件(长沙)有限公司 在离线混部动态调整应用优先级配置方法及装置
CN118276792A (zh) * 2024-06-04 2024-07-02 山东未来网络研究院(紫金山实验室工业互联网创新应用基地) 一种Kubernetes卷资源自动清理方法及系统
CN118349336A (zh) * 2024-06-18 2024-07-16 济南浪潮数据技术有限公司 云计算平台中任务处理的方法、装置、设备、介质及产品

Also Published As

Publication number Publication date
CN114840304B (zh) 2023-03-21
CN114840304A (zh) 2022-08-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23787704

Country of ref document: EP

Kind code of ref document: A1