CN117788261A - GPU computing resource scheduling method, device, equipment and storage medium - Google Patents

GPU computing resource scheduling method, device, equipment and storage medium

Info

Publication number
CN117788261A
CN117788261A (application number CN202311807725.7A)
Authority
CN
China
Prior art keywords
task
cluster
resource
computing resource
scheduler
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311807725.7A
Other languages
Chinese (zh)
Inventor
骆训浩
王振杰
刘俊涛
王元斌
周博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Shipbuilding Zhihai Innovation Research Institute Co ltd
Original Assignee
China Shipbuilding Zhihai Innovation Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Shipbuilding Zhihai Innovation Research Institute Co ltd filed Critical China Shipbuilding Zhihai Innovation Research Institute Co ltd
Priority to CN202311807725.7A priority Critical patent/CN117788261A/en
Publication of CN117788261A publication Critical patent/CN117788261A/en
Pending legal-status Critical Current

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present application relates to a GPU computing resource scheduling method, apparatus, device, and storage medium. The method comprises the following steps: a scheduler acquires cluster computing resource information and, according to that information, performs global resource scheduling of tasks under the fairness principle; the scheduler then monitors the cluster state in real time and adaptively adjusts the resource allocation parameters according to the cluster state. The method can effectively improve the utilization of GPU computing resources.

Description

GPU computing resource scheduling method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of cluster computing resources, and in particular, to a method, an apparatus, a device, and a storage medium for scheduling GPU computing resources.
Background
With the development of deep learning, the demand for computing resources such as GPUs keeps growing. In a compute cluster, GPU resources are precious, yet the common resource scheduling strategy is to allocate a dedicated GPU computing card to each task. A further scheduling approach virtualizes the GPU at the container level using GPU virtualization technology and allocates GPU resources manually in a shared mode at scheduling time. When a conventional general-purpose Kubernetes (K8s) cluster schedules GPU computing resources, it does not optimize for task characteristics, so cluster resource utilization is low. Conventional computing resource scheduling methods therefore cannot fully utilize the computing cards and easily waste computing resources.
Accordingly, the inventors provide a GPU computing resource scheduling method, apparatus, device, and storage medium.
Disclosure of Invention
(1) Technical problem to be solved
The embodiments of the present application provide a GPU computing resource scheduling method, apparatus, device, and storage medium, which aim to solve the following technical problem: conventional computing resource scheduling methods cannot fully utilize the computing cards and easily waste computing resources.
(2) Technical proposal
In a first aspect, an embodiment of the present application provides a GPU computing resource scheduling method, including:
the scheduler acquires cluster computing resource information and, according to that information, performs global resource scheduling of tasks under the fairness principle;
the scheduler monitors the cluster state in real time and adaptively adjusts the resource allocation parameters according to the cluster state.
In one embodiment, the scheduler acquiring cluster computing resource information and performing global resource scheduling of tasks under the fairness principle according to that information comprises:
the method comprises the steps that a scheduler obtains cluster computing resource information and sends the cluster computing resource information to a task queue;
the task queue performs resource pre-allocation on each task according to the cluster computing resource information to obtain an initial allocation scheme;
and the scheduler performs global resource scheduling on the tasks according to the initial allocation scheme and the fairness principle.
In one embodiment, the task queue performs resource pre-allocation on each task according to the cluster computing resource information to obtain an initial allocation scheme, including:
the task queue pre-allocates resources to each task according to the cluster computing resource information, the data shards required by each computing task, and the storage nodes where those data shards are located, to obtain the initial allocation scheme.
In one embodiment, the scheduler performs global resource scheduling on the task according to the initial allocation scheme and the fairness principle, including:
the scheduler obtains a task and the resource schedule required by the task, calculates the computing-resource consumption cost of all jobs needed to complete the task, and outputs a task-to-GPU-resource mapping under which the task is scheduled successfully at minimum cost.
In one embodiment, after the scheduler performs global resource scheduling of the tasks according to the initial allocation scheme and the fairness principle, the method further comprises:
for each scheduled task, judging whether its resources are available; if not, the task queue re-allocates resources to the task.
In one embodiment, the scheduler monitors the cluster state in real time, and adaptively adjusts the resource allocation parameter according to the cluster state, including:
the scheduler monitors the cluster state in real time and, if the resource utilization falls below a preset value, pauses all tasks and adaptively adjusts the resource allocation parameters;
the tasks are then restarted, and computation continues from the point at which each task was paused.
In one embodiment, after the scheduler monitors the cluster state in real time and, upon the resource utilization falling below a preset value, pauses all tasks and adaptively adjusts the resource allocation parameters, the method further comprises:
judging whether the tasks can be scheduled under the new resource parameters; if so, restarting the tasks; otherwise, readjusting the resource allocation parameters.
In a second aspect, an embodiment of the present application provides a GPU computing resource scheduling device, including:
the global scheduling module is used for the scheduler to acquire cluster computing resource information and perform global resource scheduling of tasks under the fairness principle according to that information;
and the parameter adjustment module is used for the scheduler to monitor the cluster state in real time and adaptively adjust the resource allocation parameters according to the cluster state.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the GPU computing resource scheduling method as described above when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program, which when executed by a processor implements a GPU computing resource scheduling method as described above.
(3) Advantageous effects
The technical scheme of the application has the following advantages:
according to the GPU computing resource scheduling method provided by the first aspect of the embodiment of the application, two-stage scheduling is adopted, the first stage ensures that the computing resource utilization rate is maximum under the condition of scheduling the most tasks, and the second stage can perform intelligent resource scheduling according to the task characteristics and the self-adaptive adjustment parameters, so that the effect of effectively improving the GPU computing resource utilization rate is achieved.
It will be appreciated that the advantages of the second, third and fourth aspects may be found in the relevant description of the first aspect and are not repeated here.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for scheduling GPU computing resources;
fig. 2 is a schematic structural diagram of a GPU computing resource scheduling device provided in the present application;
fig. 3 is a schematic structural diagram of an electronic device provided in the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise. "plurality" means "two or more".
The present application is further described in detail below with reference to the drawings and embodiments. The following examples illustrate the present application but do not limit its scope.
As shown in fig. 1, the GPU computing resource scheduling method provided in this embodiment includes:
and S100, a scheduler acquires cluster computing resource information, and performs global resource scheduling on tasks according to fairness principles according to the cluster computing resource information.
In one embodiment, the scheduler acquiring cluster computing resource information and performing global resource scheduling of tasks under the fairness principle according to that information comprises: the scheduler obtains cluster computing resource information and sends it to a task queue; the task queue pre-allocates resources to each task according to the cluster computing resource information to obtain an initial allocation scheme; and the scheduler performs global resource scheduling of the tasks according to the initial allocation scheme and the fairness principle.
In one embodiment, the task queue pre-allocating resources to each task according to the cluster computing resource information to obtain an initial allocation scheme comprises: the task queue pre-allocates resources to each task according to the cluster computing resource information, the data shards required by each computing task, and the storage nodes where those data shards are located, to obtain the initial allocation scheme.
In one embodiment, the scheduler performing global resource scheduling of the tasks according to the initial allocation scheme and the fairness principle comprises: the scheduler obtains a task and the resource schedule required by the task, calculates the computing-resource consumption cost of all jobs needed to complete the task, and outputs a task-to-GPU-resource mapping under which the task is scheduled successfully at minimum cost.
In one embodiment, after the scheduler performs global resource scheduling of the tasks according to the initial allocation scheme and the fairness principle, the method further comprises: for each scheduled task, judging whether its resources are available; if not, the task queue re-allocates resources to the task.
In application, the GPU computing resource scheduling method uses two components: a scheduler, responsible for resource allocation, and a task queue, responsible for queuing computing tasks. The scheduler collects computing resource information from the cluster's compute nodes, including CPU, memory, and GPU resource state, and provides all available computing resource information to the task queue, specifically by broadcasting the resource information to it. After receiving the cluster computing resource information, the task queue pre-allocates resources to each task so that every task obtains its best scheduling resources; concretely, it can allocate computing resources according to the data shards required by each computing task and the storage nodes where those shards reside, produce an initial resource allocation scheme, and report that scheme to the scheduler. The allocation scheme specifies, for each task, the resource size and the compute node. After receiving the initial allocation scheme, the scheduler performs global resource scheduling with a greedy algorithm, i.e., it makes global adjustments under the fairness principle to obtain a global resource allocation scheme. Resources are then scheduled for each runnable task; for each scheduled task, the scheduler judges whether its resources are available, puts tasks that cannot run into a waiting queue, and has the task queue re-allocate resources until no tasks remain in the waiting queue, at which point all tasks have been scheduled.
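A minimal sketch of the locality-aware pre-allocation step described above. The function name, data shapes, and the first-fit heuristic are illustrative assumptions, not the patent's actual implementation:

```python
# Hypothetical sketch: the task queue pre-allocates GPUs to each task,
# preferring the storage nodes that already hold the task's data shards.

def pre_allocate(tasks, nodes):
    """tasks: list of dicts with 'name', 'gpus_needed', 'data_nodes'.
    nodes: dict mapping node name -> free GPU count (mutated in place).
    Returns an initial allocation scheme as (task, gpu_count, node) triples;
    node is None when the task must go to the waiting queue."""
    scheme = []
    for task in tasks:
        # Try the nodes holding this task's data shards first, then the rest.
        candidates = list(task["data_nodes"]) + [
            n for n in nodes if n not in task["data_nodes"]
        ]
        for node in candidates:
            if nodes.get(node, 0) >= task["gpus_needed"]:
                nodes[node] -= task["gpus_needed"]
                scheme.append((task["name"], task["gpus_needed"], node))
                break
        else:
            scheme.append((task["name"], task["gpus_needed"], None))
    return scheme
```

The scheme returned here corresponds to the "task, resource size, compute node" triples reported to the scheduler.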
Global resource scheduling follows the fairness principle, preventing one task from occupying a large number of GPU devices to execute many jobs and thereby leaving other tasks with no available GPU resources. Meanwhile, when a computing task is scheduled, its data storage node must be considered, and GPU resources on the node storing the task's data should be selected whenever possible. The GPU-count-based resource allocation logic outputs a task-to-GPU-resource mapping from the input task list and the GPU cluster resource pool. The specific process is as follows: the resource manager provides resources for the tasks and initializes the GPU resource pool G = {g1, g2, ..., gn} and the task queue J = {j1, j2, ..., jn}; it then traverses the task queue, calculates the amount of GPU resources used by each task, and puts these amounts into a set A. The task ji using the least resources is selected, the resource schedule required by that task is acquired, and the schedule is passed to task ji. The task scheduler receives task ji and its resource schedule, then starts an outer loop traversing all resources in resource pool G and an inner loop traversing the jobs of task ji. If the pool contains a computing resource g no smaller than the computing resource required by a job t, the scheduler calculates the computing-resource consumption cost of completing that job, ends the inner and outer loops, and returns, so that task ji is scheduled successfully; in this way the task-to-GPU-resource mapping with minimum computation cost is obtained.
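The greedy mapping loop above can be sketched as follows. The cost model (demand multiplied by a per-unit price) and all data shapes are assumptions made for illustration, not the patent's actual logic:

```python
# Illustrative greedy mapping: for each job, scan the GPU pool (outer loop),
# keep the cheapest feasible GPU, and commit the lowest-cost assignment.

def schedule_min_cost(jobs, gpu_pool):
    """jobs: list of dicts with 'name' and 'demand' (GPU resource units).
    gpu_pool: list of dicts with 'id', 'capacity', 'cost_per_unit' (mutated).
    Returns a minimum-cost job -> GPU id mapping, or None if a job cannot fit."""
    mapping = {}
    for job in jobs:
        best = None
        for gpu in gpu_pool:  # traverse all resources in pool G
            if gpu["capacity"] >= job["demand"]:  # feasible resource g for job t
                cost = job["demand"] * gpu["cost_per_unit"]
                if best is None or cost < best[1]:
                    best = (gpu, cost)
        if best is None:
            return None  # scheduling fails; the job must wait for resources
        gpu, _ = best
        gpu["capacity"] -= job["demand"]  # commit the minimum-cost allocation
        mapping[job["name"]] = gpu["id"]
    return mapping
```

Per-job greedy selection does not always yield a globally optimal mapping, but it matches the returned "minimum-cost, successfully scheduled" behavior described in the text at low scheduling overhead.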
S200: the scheduler monitors the cluster state in real time and adaptively adjusts the resource allocation parameters according to the cluster state.
In one embodiment, the scheduler monitoring the cluster state in real time and adaptively adjusting the resource allocation parameters according to the cluster state comprises: the scheduler monitors the cluster state in real time and, if the resource utilization falls below a preset value, pauses all tasks and adaptively adjusts the resource allocation parameters; the tasks are then restarted, and computation continues from the point at which each task was paused.
In one embodiment, after the scheduler monitors the cluster state in real time and, upon the resource utilization falling below a preset value, pauses all tasks and adaptively adjusts the resource allocation parameters, the method further comprises: judging whether the tasks can be scheduled under the new resource parameters; if so, restarting the tasks; otherwise, readjusting the resource allocation parameters.
In application, while tasks run, the scheduler monitors the cluster state in real time. When cluster computing resource utilization falls below a preset value, for example 40%, it pauses all tasks, adaptively increases the resource parameters from the last allocation, and judges whether the tasks can be scheduled under the new resource parameters. If so, it restarts the tasks and performs breakpoint continuation, i.e., computation resumes from the point at which each task was paused; if scheduling fails, it readjusts the parameters, and the tasks continue once scheduling succeeds.
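A hypothetical sketch of this adaptive adjustment step. The growth factor, bounded retry policy, and fallback are illustrative assumptions, since the text only specifies adaptively increasing from the last allocated parameters and readjusting on failure:

```python
# Hypothetical sketch: grow each paused task's GPU parameters from the last
# allocation; if the grown proposal is not schedulable, readjust (shrink the
# growth factor) and retry a bounded number of times.

def adjust_parameters(task_gpus, total_free_gpus, growth=1.5, max_tries=10):
    """task_gpus: dict mapping each paused task to its last GPU allocation.
    total_free_gpus: cluster capacity the new allocation must fit into.
    Returns a schedulable increased allocation, or a copy of the original
    allocation if no schedulable increase is found."""
    for _ in range(max_tries):
        # Grow every task's allocation by at least one GPU.
        proposal = {t: max(g + 1, int(g * growth)) for t, g in task_gpus.items()}
        if sum(proposal.values()) <= total_free_gpus:
            return proposal  # schedulable under the new resource parameters
        growth *= 0.9  # readjust: try again with a smaller growth factor
    return dict(task_gpus)  # fall back to the last working parameters
```

After a successful adjustment, each task would be restarted from its checkpoint so computation continues from the interrupted point.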
This GPU computing resource scheduling method adopts two-stage scheduling: the first stage maximizes computing-resource utilization while scheduling the largest possible number of tasks, and the second stage adaptively adjusts parameters according to task characteristics for intelligent resource scheduling whenever computing-resource utilization drops, thereby effectively improving GPU computing resource utilization.
Corresponding to the GPU computing resource scheduling method described in the above embodiments, as shown in fig. 2, the present embodiment provides a GPU computing resource scheduling device, where the GPU computing resource scheduling device 200 includes:
the global scheduling module 201 is configured for the scheduler to acquire cluster computing resource information and perform global resource scheduling of tasks under the fairness principle according to that information;
the parameter adjustment module 202 is configured for the scheduler to monitor the cluster state in real time and adaptively adjust the resource allocation parameters according to the cluster state.
It should be noted that, because the content of information interaction and execution process between the modules/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and details thereof are not repeated herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The embodiment of the present application further provides an electronic device 300, as shown in fig. 3, including a memory 301, a processor 302, and a computer program 303 stored in the memory 301 and executable on the processor 302, where the processor 302 implements the steps of the GPU computing resource scheduling method provided in the first aspect when executing the computer program 303.
In application, the electronic device may include, but is not limited to, a processor and a memory, fig. 3 is merely an example of an electronic device and does not constitute limitation of an electronic device, and may include more or less components than illustrated, or combine certain components, or different components, such as an input-output device, a network access device, etc. The input output devices may include cameras, audio acquisition/playback devices, display screens, and the like. The network access device may include a network module for wireless networking with an external device.
In application, the processor may be a central processing unit (Central Processing Unit, CPU), which may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In applications, the memory may in some embodiments be an internal storage unit of the electronic device, such as a hard disk or internal memory of the electronic device. In other embodiments the memory may also be an external storage device of the electronic device, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device. The memory may also include both internal storage units and external storage devices of the electronic device. The memory is used to store the operating system, application programs, a boot loader, data, and other programs, such as the program code of a computer program. The memory may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application also provide a computer readable storage medium storing a computer program, where the computer program can implement the steps in the above-mentioned method embodiments when executed by a processor.
All or part of the processes in the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying computer program code to an electronic device, a recording medium, computer memory, read-only memory (ROM), random access memory (RAM), electrical carrier signals, telecommunications signals, and software distribution media, such as a USB flash drive, a removable hard disk, or a magnetic or optical disk.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A method for scheduling GPU computing resources, comprising:
the scheduler acquires cluster computing resource information and, according to that information, performs global resource scheduling of tasks under the fairness principle;
the scheduler monitors the cluster state in real time and adaptively adjusts the resource allocation parameters according to the cluster state.
2. The GPU computing resource scheduling method of claim 1, wherein the scheduler acquiring cluster computing resource information and performing global resource scheduling of tasks under the fairness principle according to that information comprises:
the method comprises the steps that a scheduler obtains cluster computing resource information and sends the cluster computing resource information to a task queue;
the task queue performs resource pre-allocation on each task according to the cluster computing resource information to obtain an initial allocation scheme;
and the scheduler performs global resource scheduling on the tasks according to the initial allocation scheme and the fairness principle.
3. The GPU computing resource scheduling method of claim 2, wherein the task queue performs resource pre-allocation for each task according to the cluster computing resource information to obtain an initial allocation scheme, comprising:
the task queue pre-allocates resources to each task according to the cluster computing resource information, the data shards required by each computing task, and the storage nodes where those data shards are located, to obtain the initial allocation scheme.
4. The GPU computing resource scheduling method of claim 2, wherein the scheduler performing global resource scheduling of the tasks according to the initial allocation scheme and the fairness principle comprises:
the scheduler obtains a task and the resource schedule required by the task, calculates the computing-resource consumption cost of all jobs needed to complete the task, and outputs a task-to-GPU-resource mapping under which the task is scheduled successfully at minimum cost.
5. The GPU computing resource scheduling method of claim 2, wherein, after the scheduler performs global resource scheduling of the tasks according to the initial allocation scheme and the fairness principle, the method further comprises:
for each scheduled task, judging whether its resources are available; if not, the task queue re-allocates resources to the task.
6. The GPU computing resource scheduling method of claim 1, wherein the scheduler monitors cluster status in real time and adaptively adjusts resource allocation parameters according to the cluster status, comprising:
the scheduler monitors the cluster state in real time and, if the resource utilization falls below a preset value, pauses all tasks and adaptively adjusts the resource allocation parameters;
the tasks are then restarted, and computation continues from the point at which each task was paused.
7. The GPU computing resource scheduling method of claim 1, wherein, after the scheduler monitors the cluster state in real time and, upon the resource utilization falling below a preset value, pauses all tasks and adaptively adjusts the resource allocation parameters, the method further comprises:
judging whether the tasks can be scheduled under the new resource parameters; if so, restarting the tasks; otherwise, readjusting the resource allocation parameters.
8. A GPU computing resource scheduling apparatus, comprising:
a global scheduling module, used for the scheduler to acquire cluster computing resource information and perform global resource scheduling of tasks under the fairness principle according to that information;
and a parameter adjustment module, used for the scheduler to monitor the cluster state in real time and adaptively adjust the resource allocation parameters according to the cluster state.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the GPU computing resource scheduling method of any of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements a GPU computing resource scheduling method as claimed in any of claims 1 to 7.
CN202311807725.7A 2023-12-26 2023-12-26 GPU computing resource scheduling method, device, equipment and storage medium Pending CN117788261A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311807725.7A CN117788261A (en) 2023-12-26 2023-12-26 GPU computing resource scheduling method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117788261A true CN117788261A (en) 2024-03-29

Family

ID=90382866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311807725.7A Pending CN117788261A (en) 2023-12-26 2023-12-26 GPU computing resource scheduling method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117788261A (en)

Similar Documents

Publication Publication Date Title
KR101553649B1 (en) Multicore apparatus and job scheduling method thereof
CN111767134A (en) Multitask dynamic resource scheduling method
CN110413412B (en) GPU (graphics processing Unit) cluster resource allocation method and device
US8677362B2 (en) Apparatus for reconfiguring, mapping method and scheduling method in reconfigurable multi-processor system
US20110302587A1 (en) Information processing device and information processing method
CN109564528B (en) System and method for computing resource allocation in distributed computing
WO2022068697A1 (en) Task scheduling method and apparatus
US9141436B2 (en) Apparatus and method for partition scheduling for a processor with cores
CN112486642B (en) Resource scheduling method, device, electronic equipment and computer readable storage medium
CN109450803B (en) Traffic scheduling method, device and system
CN111798113A (en) Resource allocation method, device, storage medium and electronic equipment
CN115237556A (en) Scheduling method and device, chip, electronic equipment and storage medium
CN114579285A (en) Task running system and method and computing device
CN115167996A (en) Scheduling method and device, chip, electronic equipment and storage medium
CN112925616A (en) Task allocation method and device, storage medium and electronic equipment
CN112860387A (en) Distributed task scheduling method and device, computer equipment and storage medium
CN116048740A (en) Task scheduling method and system based on many-core system, electronic equipment and medium
CN113010309B (en) Cluster resource scheduling method, device, storage medium, equipment and program product
CN115640113A (en) Multi-plane flexible scheduling method
CN117271096A (en) Scheduling method, electronic device, and computer-readable storage medium
CN117788261A (en) GPU computing resource scheduling method, device, equipment and storage medium
EP2413240A1 (en) Computer micro-jobs
CN112395063B (en) Dynamic multithreading scheduling method and system
CN112130974B (en) Cloud computing resource configuration method and device, electronic equipment and storage medium
CN113521753A (en) System resource adjusting method, device, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination