CN111488206A - Deep learning task scheduling method, system, terminal and storage medium - Google Patents

Deep learning task scheduling method, system, terminal and storage medium Download PDF

Info

Publication number
CN111488206A
Authority
CN
China
Prior art keywords
task
tasks
user
project
polling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010154800.4A
Other languages
Chinese (zh)
Inventor
刘晓健 (Liu Xiaojian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202010154800.4A priority Critical patent/CN111488206A/en
Publication of CN111488206A publication Critical patent/CN111488206A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Abstract

The invention provides a deep learning task scheduling method, system, terminal and storage medium, comprising the following steps: creating project groups and assigning users to the corresponding project groups; distributing each user's tasks to the task queue of the project group to which the user belongs; generating a task execution order within the task queue according to the user level and the task's own parameters; and polling the task queues of the project groups, executing the tasks of each visited queue in that execution order. The invention ensures that tasks in different queues are not affected by one another's priorities, so deep learning tasks can be completed more efficiently and low-priority tasks are not left in the queuing state for long periods.

Description

Deep learning task scheduling method, system, terminal and storage medium
Technical Field
The invention relates to the technical field of deep learning, in particular to a deep learning task scheduling method, system, terminal and storage medium.
Background
With the development of technology, the field of deep learning has attracted wide attention. The scheduling algorithm is the core of a deep learning system and is a decisive factor in whether deep learning tasks can be completed quickly and efficiently and whether computing resources are used reasonably.
The most common scheduling algorithm is a Job-priority-based deep learning task scheduling method: queued training tasks are tagged with priorities so that high-priority tasks run first. However, because this algorithm selects tasks by comparing priorities on every pass, a task tagged with a low priority may be unable to obtain resources to run for a long time.
Disclosure of Invention
In view of the above-mentioned deficiencies of the prior art, the present invention provides a method, a system, a terminal and a storage medium for scheduling deep learning tasks, so as to solve the above-mentioned technical problems.
In a first aspect, the present invention provides a deep learning task scheduling method, including:
creating project groups and dividing users into corresponding project groups;
distributing the user tasks to task queues of project groups to which the users belong;
generating a task execution sequence in the task queue according to the user level and the parameters of the task;
and polling the task queues of the project groups, and executing the tasks of the accessed task queues according to the task execution sequence.
Further, the creating a project group and dividing users into corresponding project groups includes:
and dividing the newly added user into corresponding item groups according to the items to which the tasks required to be executed by the newly added user belong.
Further, the generating of the task execution sequence in the task queue according to the user level and the parameters of the task itself includes:
acquiring a user level to which a task in a task queue belongs;
sequencing the corresponding tasks from first to last according to the user level from high to low;
if the user levels are the same, calculating the resource occupation ratio of the tasks in the same level, and arranging the tasks with smaller resource occupation ratio before the tasks with larger resource occupation ratio in the same level;
if the resource occupation ratios of the tasks with the same user level are the same, the tasks with the earlier creation time are ranked in front.
Further, the polling task queue of the project group includes:
setting polling resources corresponding to each project group;
polling all project groups and allocating corresponding polling resources to the accessed project group task queue.
In a second aspect, the present invention provides a deep learning task scheduling system, including:
the user dividing unit is configured for creating project groups and dividing users into corresponding project groups;
the task allocation unit is configured for allocating the user tasks to the task queues of the project groups to which the users belong;
the sequence generating unit is configured for generating a task execution sequence in the task queue according to the user level and the parameters of the task;
and the queue polling unit is configured for polling the task queues of the project groups and executing the tasks of the accessed task queues according to the task execution sequence.
Further, the user dividing unit includes:
and the item matching module is configured for dividing the newly added user into corresponding item groups according to the items to which the tasks required to be executed by the newly added user belong.
Further, the order generation unit includes:
the level acquisition module is configured to acquire the user level of the task in the task queue;
the task ordering module is configured to order the corresponding tasks from first to last according to the user level from high to low;
the resource sequencing module is configured and used for calculating the resource occupation ratio of the tasks in the same level and sequencing the tasks with smaller resource occupation ratio before the tasks with larger resource occupation ratio in the same level if the user levels are the same;
and the time sequencing module is configured for arranging the tasks with earlier creation time in front if the resource occupation ratios of the tasks with the same user level are also the same.
Further, the queue polling unit includes:
the resource setting module is used for setting polling resources corresponding to each project group;
and the resource issuing module is configured to poll all the project groups and allocate corresponding polling resources to the accessed project group task queues.
In a third aspect, a terminal is provided, including:
a processor, a memory, wherein,
the memory is used for storing a computer program which,
the processor is used for calling and running the computer program from the memory, so that the terminal executes the method described above.
In a fourth aspect, a computer storage medium is provided having stored therein instructions that, when executed on a computer, cause the computer to perform the method of the above aspects.
The beneficial effect of the invention is that,
according to the deep learning task scheduling method, system, terminal and storage medium, tasks are queued by group queue and Job priority tag, and polling scheduling is implemented so that tasks in different queues are not affected by one another's priorities. Deep learning tasks can thus be completed more efficiently, and low-priority tasks are not left in the queuing state for long periods.
In addition, the invention has reliable design principle, simple structure and very wide application prospect.
Drawings
In order to more clearly illustrate the embodiments or technical solutions in the prior art of the present invention, the drawings used in the description of the embodiments or prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
FIG. 1 is a schematic flow diagram of a method of one embodiment of the invention.
FIG. 2 is a schematic block diagram of a system of one embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solution in the embodiment of the present invention will be clearly and completely described below with reference to the drawings in the embodiment of the present invention, and it is obvious that the described embodiment is only a part of the embodiment of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
FIG. 1 is a schematic flow diagram of a method of one embodiment of the invention. The execution subject in fig. 1 may be a deep learning task scheduling system.
As shown in fig. 1, the method 100 includes:
step 110, creating project groups and dividing users into corresponding project groups;
step 120, distributing the user tasks to task queues of project groups to which the users belong;
step 130, generating a task execution sequence in the task queue according to the user level and the parameters of the task;
and step 140, polling the task queues of the project groups, and executing the tasks of the accessed task queues according to the task execution sequence.
In order to facilitate understanding of the present invention, the deep learning task scheduling method provided by the present invention is further described below with reference to the principle of the deep learning task scheduling method of the present invention and the process of scheduling the deep learning task in the embodiment.
Specifically, the deep learning task scheduling method includes:
and S1, creating project groups and dividing users under the corresponding project groups.
A plurality of project groups is created according to the projects served by the platform. When a project group is created in k8s (Kubernetes, an open-source container cluster management platform that provides automatic deployment, automatic scaling, maintenance and other functions for container clusters), a queue (task queue) is created at the bottom layer at the same time, and the different queues can be listed at the bottom layer with the kubectl get queue command. This embodiment creates two project groups, project group 1 and project group 2, and establishes two task queues, queue1 and queue2.
And S2, distributing the user tasks to the task queue of the project group to which the user belongs.
Users user1 and user2 are created and placed under different user groups. When assigning users to project groups, the assignment can be random or oriented to the projects their tasks relate to.
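The setup in S1 and S2 — project groups, one task queue per group, and users bound to groups — can be sketched with plain Python data structures. This is an illustrative model only; the class and method names are assumptions, not part of the patent or of Kubernetes:

```python
from collections import deque

class Scheduler:
    """Toy model of the project-group / task-queue layout described above."""
    def __init__(self):
        self.queues = {}       # project group name -> task queue
        self.user_group = {}   # user name -> project group name

    def create_group(self, group):
        # Mirrors creating a queue alongside each project group
        self.queues[group] = deque()

    def add_user(self, user, group):
        self.user_group[user] = group

    def submit(self, user, task):
        # A task always lands in the queue of its owner's project group
        self.queues[self.user_group[user]].append(task)

s = Scheduler()
for g in ("project1", "project2"):
    s.create_group(g)
s.add_user("user1", "project1")
s.submit("user1", "job1")
print(list(s.queues["project1"]))  # ['job1']
```

In the real system these structures would be backed by the k8s queue objects rather than in-memory deques.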
And S3, generating a task execution sequence in the task queue according to the user level and the parameters of the task.
Task queue1 and task queue2 use the same task ordering method; queue1 is taken as an example:
a user priority (low, medium or high) is set when the user is created, and all tasks of that user take the user's level as their priority. Higher-priority tasks are queued first: for example, if high-priority user user1 creates task job1 and low-priority user user2 creates task job2, then job1 is queued ahead of job2 in queue1 and is scheduled preferentially.
When the task priorities are the same, the tasks' resource occupation ratios are compared, computed as follows:
GPU_distribution = JobGPU_request / ClusterGPU_allocatable
Memory_distribution = JobMemory_request / ClusterMemory_allocatable
CPU_distribution = JobCPU_request / ClusterCPU_allocatable
wherein GPU_distribution is the GPU resource proportion, JobGPU_request is the GPU resource requested by the task, and ClusterGPU_allocatable is the GPU resource allocable by the cluster; Memory_distribution is the memory resource proportion, JobMemory_request is the memory resource requested by the task, and ClusterMemory_allocatable is the memory resource allocable by the cluster; CPU_distribution is the CPU resource proportion, JobCPU_request is the CPU resource requested by the task, and ClusterCPU_allocatable is the CPU resource allocable by the cluster.
The percentages of the task's requested GPU, memory and CPU resources relative to the cluster's allocable resources are calculated respectively. (Note: the requested resources are the resources the task declares when it is created; the cluster's allocable resources are the sum of the available resources of all nodes in the k8s cluster.) The maximum of the three percentages is taken as the task's resource ratio. Tasks are compared by this ratio, and tasks with a smaller ratio are placed nearer the front of the queue.
If the task resource ratios are also the same, the task creation times are compared and the earlier-created task is scheduled first.
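The full ordering rule above — user level first, then resource ratio (the maximum of the three request/allocatable percentages), then creation time — can be sketched as a Python sort key. The task and cluster field names here are assumptions for illustration, not taken from the patent:

```python
def resource_ratio(task, cluster):
    # Max of the three request/allocatable proportions, per the formulas above
    return max(task["gpu"] / cluster["gpu"],
               task["mem"] / cluster["mem"],
               task["cpu"] / cluster["cpu"])

def order_key(task, cluster):
    # Higher user level first (negate), then smaller ratio, then earlier ctime
    return (-task["level"], resource_ratio(task, cluster), task["ctime"])

cluster = {"gpu": 8, "mem": 256, "cpu": 64}
jobs = [
    {"name": "job2", "level": 1, "gpu": 1, "mem": 16, "cpu": 4, "ctime": 5},
    {"name": "job1", "level": 3, "gpu": 4, "mem": 64, "cpu": 16, "ctime": 9},
]
jobs.sort(key=lambda t: order_key(t, cluster))
print([t["name"] for t in jobs])  # ['job1', 'job2'] — the higher level wins
```

Because Python compares tuples element by element, each tie-break only applies when all earlier fields are equal, which matches the cascade described in the text.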
And S4, polling the task queue of the project group, and executing the tasks of the accessed task queue according to the task execution sequence.
By querying the queue information of the queued tasks at the bottom layer (queue1 and queue2 are found), the k8s scheduler selects job1 from queue1, then job1 from queue2; on the next poll of queue1 and queue2 it selects job2 from queue1 and job2 from queue2, and so on until all tasks have run. In this process, tasks in different queues are not affected by one another's priorities. For example, if task job2 in queue1 was created by a low-priority user and task job2 in queue2 by a high-priority user, job2 in queue1 is still executed first when queue1 is polled.
If the project groups differ in importance, more resources can be released to the important project group on each poll. For example, suppose queue1 belongs to the important project group: after queue1 and queue2 are queried, job1 and job2 are selected from queue1 but only job1 from queue2. On each poll, the resources allotted to task queue1 can then process two tasks while those allotted to task queue2 can process one, i.e. the important project group's task queue processes tasks at twice the rate of the other project groups' queues.
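A minimal sketch of this weighted polling, modeling each group's polling resource as a number of task slots per round (the function and weight names are illustrative assumptions, and every weight is assumed to be at least 1):

```python
from collections import deque

def poll(queues, weights):
    """Visit each queue in turn, releasing `weight` slots per visit."""
    executed = []
    while any(queues.values()):
        for name, q in queues.items():
            for _ in range(weights[name]):  # more slots for important groups
                if q:
                    executed.append((name, q.popleft()))
    return executed

queues = {"queue1": deque(["job1", "job2"]), "queue2": deque(["job1", "job2"])}
# queue1 is the important group's queue, so it gets two slots per round
order = poll(queues, {"queue1": 2, "queue2": 1})
print(order)
```

With these weights, queue1 drains both of its tasks in the first round while queue2 runs one task per round, matching the two-to-one processing rate described above.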
As shown in fig. 2, the system 200 includes:
a user dividing unit 210 configured to create a project group and divide users into corresponding project groups;
a task allocation unit 220 configured to allocate a user task to a task queue of an item group to which a user belongs;
a sequence generating unit 230 configured to generate a task execution sequence in the task queue according to the user level and the parameters of the task;
and the queue polling unit 240 is configured to poll the task queues of the project groups and execute the tasks of the accessed task queues according to the task execution sequence.
Optionally, as an embodiment of the present invention, the user dividing unit includes:
and the item matching module is configured for dividing the newly added user into corresponding item groups according to the items to which the tasks required to be executed by the newly added user belong.
Optionally, as an embodiment of the present invention, the order generating unit includes:
the level acquisition module is configured to acquire the user level of the task in the task queue;
the task ordering module is configured to order the corresponding tasks from first to last according to the user level from high to low;
the resource sequencing module is configured and used for calculating the resource occupation ratio of the tasks in the same level and sequencing the tasks with smaller resource occupation ratio before the tasks with larger resource occupation ratio in the same level if the user levels are the same;
and the time sequencing module is configured for arranging the tasks with earlier creation time in front if the resource occupation ratios of the tasks with the same user level are also the same.
Optionally, as an embodiment of the present invention, the queue polling unit includes:
the resource setting module is used for setting polling resources corresponding to each project group;
and the resource issuing module is configured to poll all the project groups and allocate corresponding polling resources to the accessed project group task queues.
Fig. 3 is a schematic structural diagram of a terminal system 300 according to an embodiment of the present invention, where the terminal system 300 may be used to execute the deep learning task scheduling method according to the embodiment of the present invention.
The terminal system 300 may include: a processor 310, a memory 320, and a communication unit 330. The components communicate via one or more buses, and those skilled in the art will appreciate that the architecture of the servers shown in the figures is not intended to be limiting, and may be a bus architecture, a star architecture, a combination of more or less components than those shown, or a different arrangement of components.
The memory 320 may be used for storing instructions executed by the processor 310, and the memory 320 may be implemented by any type of volatile or non-volatile storage terminal or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk. The executable instructions in memory 320, when executed by processor 310, enable terminal 300 to perform some or all of the steps in the method embodiments described below.
The processor 310 is a control center of the storage terminal, connects various parts of the entire electronic terminal using various interfaces and lines, and performs various functions of the electronic terminal and/or processes data by operating or executing software programs and/or modules stored in the memory 320 and calling data stored in the memory. The processor may be composed of an Integrated Circuit (IC), for example, a single packaged IC, or a plurality of packaged ICs connected with the same or different functions. For example, the processor 310 may include only a Central Processing Unit (CPU). In the embodiment of the present invention, the CPU may be a single operation core, or may include multiple operation cores.
A communication unit 330, configured to establish a communication channel so that the storage terminal can communicate with other terminals. And receiving user data sent by other terminals or sending the user data to other terminals.
The present invention also provides a computer storage medium, wherein the computer storage medium may store a program, and the program may include some or all of the steps in the embodiments provided by the present invention when executed. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a Random Access Memory (RAM).
Therefore, the invention queues tasks by group queue and Job priority tag to implement polling scheduling, so that tasks in different queues are not affected by one another's priorities, deep learning tasks can be completed more efficiently, and low-priority tasks are not left in the queuing state for long periods. For the technical effects achieved by this embodiment, reference may be made to the description above, which is not repeated here.
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be embodied in the form of a software product, where the computer software product is stored in a storage medium, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like, and the storage medium can store program codes, and includes instructions for enabling a computer terminal (which may be a personal computer, a server, or a second terminal, a network terminal, and the like) to perform all or part of the steps of the method in the embodiments of the present invention.
The same and similar parts in the various embodiments in this specification may be referred to each other. Especially, for the terminal embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and the relevant points can be referred to the description in the method embodiment.
In the embodiments provided in the present invention, it should be understood that the disclosed system and method can be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, systems or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
Although the present invention has been described in detail with reference to the drawings and the preferred embodiments, the present invention is not limited thereto. Those skilled in the art can make various equivalent modifications or substitutions to the embodiments of the present invention without departing from its spirit and scope, and such modifications or substitutions fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A deep learning task scheduling method is characterized by comprising the following steps:
creating project groups and dividing users into corresponding project groups;
distributing the user tasks to task queues of project groups to which the users belong;
generating a task execution sequence in the task queue according to the user level and the parameters of the task;
and polling the task queues of the project groups, and executing the tasks of the accessed task queues according to the task execution sequence.
2. The method of claim 1, wherein creating project groups and grouping users under respective project groups comprises:
and dividing the newly added user into corresponding item groups according to the items to which the tasks required to be executed by the newly added user belong.
3. The method according to claim 1, wherein the generating of the task execution sequence in the task queue according to the user level and the parameters of the task itself comprises:
acquiring a user level to which a task in a task queue belongs;
sequencing the corresponding tasks from first to last according to the user level from high to low;
if the user levels are the same, calculating the resource occupation ratio of the tasks in the same level, and arranging the tasks with smaller resource occupation ratio before the tasks with larger resource occupation ratio in the same level;
if the resource occupation ratios of the tasks with the same user level are the same, the tasks with the earlier creation time are ranked in front.
4. The method of claim 1, wherein polling a task queue of a project group comprises:
setting polling resources corresponding to each project group;
polling all project groups and allocating corresponding polling resources to the accessed project group task queue.
5. A deep learning task scheduling system, comprising:
the user dividing unit is configured for creating project groups and dividing users into corresponding project groups;
the task allocation unit is configured for allocating the user tasks to the task queues of the project groups to which the users belong;
the sequence generating unit is configured for generating a task execution sequence in the task queue according to the user level and the parameters of the task;
and the queue polling unit is configured for polling the task queues of the project groups and executing the tasks of the accessed task queues according to the task execution sequence.
6. The system of claim 5, wherein the user partition unit comprises:
and the item matching module is configured for dividing the newly added user into corresponding item groups according to the items to which the tasks required to be executed by the newly added user belong.
7. The system of claim 5, wherein the order generation unit comprises:
the level acquisition module is configured to acquire the user level of the task in the task queue;
the task ordering module is configured to order the corresponding tasks from first to last according to the user level from high to low;
the resource sequencing module is configured and used for calculating the resource occupation ratio of the tasks in the same level and sequencing the tasks with smaller resource occupation ratio before the tasks with larger resource occupation ratio in the same level if the user levels are the same;
and the time sequencing module is configured for arranging the tasks with earlier creation time in front if the resource occupation ratios of the tasks with the same user level are also the same.
8. The system of claim 5, wherein the queue polling unit comprises:
the resource setting module is used for setting polling resources corresponding to each project group;
and the resource issuing module is configured to poll all the project groups and allocate corresponding polling resources to the accessed project group task queues.
9. A terminal, comprising:
a processor;
a memory for storing instructions for execution by the processor;
wherein the processor is configured to perform the method of any one of claims 1-4.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-4.
CN202010154800.4A 2020-03-08 2020-03-08 Deep learning task scheduling method, system, terminal and storage medium Withdrawn CN111488206A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010154800.4A CN111488206A (en) 2020-03-08 2020-03-08 Deep learning task scheduling method, system, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010154800.4A CN111488206A (en) 2020-03-08 2020-03-08 Deep learning task scheduling method, system, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN111488206A true CN111488206A (en) 2020-08-04

Family

ID=71812463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010154800.4A Withdrawn CN111488206A (en) 2020-03-08 2020-03-08 Deep learning task scheduling method, system, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111488206A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113419833A (en) * 2021-06-24 2021-09-21 中国信息通信研究院 Method and device for quantum cloud computing platform task scheduling and quantum cloud computing platform task scheduling server
CN115328640A (en) * 2022-10-17 2022-11-11 广州数说故事信息科技有限公司 Task scheduling method, device and system and computer readable storage medium
CN116501506A (en) * 2023-06-27 2023-07-28 苏州仰思坪半导体有限公司 Resource polling arbitration method, device, medium and computing equipment
WO2024021489A1 (en) * 2022-07-29 2024-02-01 天翼云科技有限公司 Task scheduling method and apparatus, and kubernetes scheduler

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8881161B1 (en) * 2010-01-28 2014-11-04 Applied Micro Circuits Corporation Operating system with hardware-enabled task manager for offloading CPU task scheduling
WO2020000944A1 (en) * 2018-06-25 2020-01-02 星环信息科技(上海)有限公司 Preemptive scheduling based resource sharing use method, system and
CN110837410A (en) * 2019-10-30 2020-02-25 北京奇艺世纪科技有限公司 Task scheduling method and device, electronic equipment and computer readable storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113419833A (en) * 2021-06-24 2021-09-21 中国信息通信研究院 Method and device for quantum cloud computing platform task scheduling and quantum cloud computing platform task scheduling server
CN113419833B (en) * 2021-06-24 2023-12-29 中国信息通信研究院 Method and device for task scheduling of quantum cloud computing platform and task scheduling server of quantum cloud computing platform
WO2024021489A1 (en) * 2022-07-29 2024-02-01 天翼云科技有限公司 Task scheduling method and apparatus, and kubernetes scheduler
CN115328640A (en) * 2022-10-17 2022-11-11 广州数说故事信息科技有限公司 Task scheduling method, device and system and computer readable storage medium
CN116501506A (en) * 2023-06-27 2023-07-28 苏州仰思坪半导体有限公司 Resource polling arbitration method, device, medium and computing equipment
CN116501506B (en) * 2023-06-27 2023-09-12 苏州仰思坪半导体有限公司 Resource polling arbitration method, device, medium and computing equipment

Similar Documents

Publication Publication Date Title
CN111488206A (en) Deep learning task scheduling method, system, terminal and storage medium
CN107688492B (en) Resource control method and device and cluster resource management system
CN112000463B (en) GPU resource allocation method, system, terminal and storage medium based on CUDA
CN112272203B (en) Cluster service node selection method, system, terminal and storage medium
CN111966500A (en) Resource scheduling method and device, electronic equipment and storage medium
US11496413B2 (en) Allocating cloud computing resources in a cloud computing environment based on user predictability
CN111158852A (en) Training resource dynamic allocation method, system, terminal and storage medium
CN112269641A (en) Scheduling method, scheduling device, electronic equipment and storage medium
CN107682391B (en) Electronic device, server allocation control method, and computer-readable storage medium
CN114416352A (en) Computing resource allocation method and device, electronic equipment and storage medium
CN113238848A (en) Task scheduling method and device, computer equipment and storage medium
CN112783659A (en) Resource allocation method and device, computer equipment and storage medium
CN105592110A (en) Resource scheduling method and device
CN111193802A (en) Dynamic resource allocation method, system, terminal and storage medium based on user group
CN114968565A (en) Resource management method, device, electronic equipment, storage medium and server
CN111597044A (en) Task scheduling method and device, storage medium and electronic equipment
CN114629960A (en) Resource scheduling method, device, system, device, medium, and program product
CN112073532B (en) Resource allocation method and device
CN111475251A (en) Cluster container scheduling method, system, terminal and storage medium
Singh et al. Scheduling algorithm with load balancing in cloud computing
CN112463376A (en) Resource allocation method and device
CN111796934B (en) Task issuing method and device, storage medium and electronic equipment
CN112073498B (en) Resource allocation method and device
WO2017133421A1 (en) Method and device for sharing resources among multiple tenants
CN114489978A (en) Resource scheduling method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 2020-08-04