CN114296934A - Method and device for allocating Yarn computing resources, computer equipment and storage medium - Google Patents


Info

Publication number
CN114296934A
Authority
CN
China
Prior art keywords
current
queue
yarn
data
task
Prior art date
Legal status
Pending
Application number
CN202111657174.1A
Other languages
Chinese (zh)
Inventor
王亚涛
王亚磊
高妍
Current Assignee
Tianyi IoT Technology Co Ltd
Original Assignee
Tianyi IoT Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Tianyi IoT Technology Co Ltd filed Critical Tianyi IoT Technology Co Ltd
Priority to CN202111657174.1A
Publication of CN114296934A

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method and a device for allocating Yarn computing resources, computer equipment and a storage medium. The method periodically obtains the number of current tasks submitted by a data aggregation queue and the number of current tasks submitted by a data scheduling queue, and calculates a current load factor; sets a corresponding current priority factor according to the task priorities in the data aggregation queue and the data scheduling queue; and updates the current Yarn computing resource ratios of the data aggregation queue and the data scheduling queue according to the current load factor and the current priority factor. The invention introduces the concepts of a load factor and a priority factor; on the premise of following the principles of the scheduler, the maximum adaptive allocation ratio of different queues can be calculated accordingly, so that resources are allocated more reasonably.

Description

Method and device for allocating Yarn computing resources, computer equipment and storage medium
Technical Field
The invention relates to the technical field of emerging information technology, and in particular to a method and a device for allocating Yarn computing resources, computer equipment and a storage medium.
Background
The traditional Capacity Scheduler assigns a fixed Yarn computing resource value (Configured Capacity) to the data aggregation queue (data_merge) and the data scheduling queue (data_dev) in advance, based on experience. With such a fixed value, the serial relationship between data aggregation and data scheduling and the isolated timing characteristics of multiple data aggregation tasks cannot be fully considered, so the two queues cannot reach their maximum resource utilization at the appropriate points in time.
Disclosure of Invention
The invention aims to provide a method and a device for allocating Yarn computing resources, computer equipment and a storage medium, so as to solve the problem that the existing scheduler cannot give each queue its maximum resource utilization ratio at the appropriate point in time.
In order to solve the above technical problem, the invention adopts the following technical solution: a method for allocating Yarn computing resources is provided, which comprises the following steps:
the method comprises the steps of regularly obtaining the number of current tasks submitted by a data aggregation queue and the number of current tasks submitted by a data scheduling queue, and calculating a current load factor;
setting a corresponding current priority factor according to the task priority in the data aggregation queue and the data scheduling queue;
and updating the current Yarn computing resource ratio of the data aggregation queue and the data scheduling queue according to the current load factor and the current priority factor.
In addition, an embodiment of the present invention further provides an apparatus for allocating Yarn computing resources, including:
the load factor calculation unit is used for regularly acquiring the number of current tasks submitted by the data aggregation queue and the number of current tasks submitted by the data scheduling queue and calculating a current load factor;
the priority factor setting unit is used for setting a corresponding current priority factor according to the task priorities in the data aggregation queue and the data scheduling queue;
and the resource ratio updating unit is used for updating the current Yarn computing resource ratio of the data aggregation queue and the data scheduling queue according to the current load factor and the current priority factor.
In addition, an embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method for allocating Yarn computing resources according to the first aspect when executing the computer program.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, causes the processor to execute the method for allocating Yarn computing resources according to the first aspect.
The embodiment of the invention discloses a method and a device for allocating Yarn computing resources, computer equipment and a storage medium. The method periodically obtains the number of current tasks submitted by the data aggregation queue and the number of current tasks submitted by the data scheduling queue, and calculates a current load factor; sets a corresponding current priority factor according to the task priorities in the data aggregation queue and the data scheduling queue; and updates the current Yarn computing resource ratios of the two queues according to the current load factor and the current priority factor. The embodiment of the invention introduces the concepts of a load factor and a priority factor; on the premise of following the principles of the scheduler, the maximum adaptive allocation ratio of different queues can be calculated accordingly, so that resources are allocated more reasonably.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a method for allocating Yarn computing resources according to an embodiment of the present invention;
fig. 2 is a schematic sub-flowchart of step S101 according to an embodiment of the present invention;
fig. 3 is a schematic sub-flowchart of step S202 according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of an apparatus for allocating Yarn computing resources according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of a computer device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating a method for allocating Yarn computing resources according to an embodiment of the present invention;
as shown in fig. 1, the method includes steps S101 to S103.
S101, periodically acquiring the number of current tasks submitted by the data aggregation queue and the number of current tasks submitted by the data scheduling queue, and calculating a current load factor;
S102, setting a corresponding current priority factor according to the task priorities in the data aggregation queue and the data scheduling queue;
s103, updating the current Yarn computing resource ratio of the data aggregation queue and the data scheduling queue according to the current load factor and the current priority factor.
For ease of understanding, the data aggregation queue and the data scheduling queue are introduced first. They correspond to two major links in an ETL flow and belong to two different tenant queues, and their tasks follow the serial order of data aggregation before data scheduling; that is, a task in the data scheduling queue is triggered automatically only after the corresponding task in the data aggregation queue has finished executing. When the two queues execute their tasks, Yarn resources are allocated by the Capacity Scheduler, and the allocation ratio of the Yarn resources is key to how efficiently a queue executes its tasks. Ideally, an application's request for Yarn resources would be satisfied immediately, but in practice resources are limited, especially in a busy cluster, and a request often has to wait for some time before the corresponding resources are obtained. The invention therefore provides a method for dynamically allocating Yarn resources.
Specifically, the invention introduces the concepts of a load factor and a priority factor. On the premise of following the principles of the Capacity Scheduler, the maximum adaptive allocation ratio of the data aggregation queue and the data scheduling queue can be calculated, and the Yarn computing resources are refreshed periodically through the Yarn interface. This causes no service interruption, is suitable for dynamically allocating the resource ratio between the data aggregation queue and the data scheduling queue, and constitutes an optimization strategy for adaptive Yarn resource allocation in a serialized scenario: resources are allocated reasonably according to actual service requirements, which shortens scheduling time and improves production efficiency.
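As an illustration of the periodic refresh described above, the following sketch (a minimal Python example, not the patented implementation) rewrites the two queues' capacities in the Capacity Scheduler configuration and asks the ResourceManager to reload them with the standard `yarn rmadmin -refreshQueues` command. The configuration path and the exact queue paths (root.data_merge, root.data_dev) are assumptions for illustration; the property naming pattern follows the usual Capacity Scheduler convention.

```python
import subprocess
import xml.etree.ElementTree as ET

CONF = "/etc/hadoop/conf/capacity-scheduler.xml"  # assumed location of the scheduler config

def set_queue_capacities(x_percent: float, y_percent: float) -> None:
    """Write new capacities for root.data_merge (x) and root.data_dev (y) and reload them."""
    targets = {
        "yarn.scheduler.capacity.root.data_merge.capacity": str(x_percent),
        "yarn.scheduler.capacity.root.data_dev.capacity": str(y_percent),
    }
    tree = ET.parse(CONF)
    for prop in tree.getroot().iter("property"):
        name = prop.findtext("name")
        if name in targets:
            prop.find("value").text = targets[name]
    tree.write(CONF)
    # Reload the queue configuration without restarting the ResourceManager,
    # so running services are not interrupted.
    subprocess.run(["yarn", "rmadmin", "-refreshQueues"], check=True)
```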
Specifically, the current Yarn computing resource ratio of the data aggregation queue and the data scheduling queue is computed according to the following formula:
x + y = A (x ∈ [1%, A%), y ∈ [1%, A%));
x / y = α * β (α = 3.0);
β = β_x / β_y.
where x represents the current Yarn computing resource ratio of the data aggregation queue, y represents the current Yarn computing resource ratio of the data scheduling queue, A represents the total resource ratio that the whole cluster environment can assign to x and y after the Yarn computing resources currently occupied by other queues are removed (the value of A does not exceed 100%), β represents the current load factor, α represents the current priority factor, β_x represents the current number of tasks of the data aggregation queue, and β_y represents the current number of tasks of the data scheduling queue.
In this embodiment, the total of the Yarn computing resources is 100%. After subtracting the resources pre-occupied by other queues, 90% of the resources can be assigned to the data aggregation queue and the data scheduling queue, i.e. A is preferably 90% and can be chosen according to the actual situation. The current priority factor α is a preset value set according to service importance, either from a predefined service-level standard or manually; it represents the priority for using Yarn computing resources during data scheduling and can be tuned according to actual conditions. Because the data aggregation queue has a higher service priority than the data scheduling queue, the initial preferred value is 3; the larger the value, the more the calculated Yarn computing resource allocation is biased toward the data aggregation queue. On this basis, α can be increased when the number of current tasks submitted by the data aggregation queue is larger.
Specifically, substituting the values of A, β, α, β_x and β_y into the above formulas yields the current Yarn computing resource ratio x of the data aggregation queue and the current Yarn computing resource ratio y of the data scheduling queue. In particular, when the current task numbers of the two queues are obtained, if the data aggregation queue has 0 tasks, its current Yarn computing resource ratio is set to a preset ratio; likewise, if the data scheduling queue has 0 tasks, its current Yarn computing resource ratio is set to the preset ratio. The preset ratio is preferably 1% and is used to maintain the minimum standby computing requirement.
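Combining the three formulas gives y = A / (1 + α * β) and x = A - y. The sketch below (Python, for illustration only) solves for x and y and applies the 1% preset ratio when a queue has no tasks; how the remaining share is assigned when one or both queues are idle is an assumption not spelled out above.

```python
def compute_ratios(beta_x: int, beta_y: int, A: float = 90.0, alpha: float = 3.0):
    """Return (x, y): Yarn capacity shares (in percent) for the data aggregation
    and data scheduling queues, following x + y = A, x / y = alpha * beta,
    beta = beta_x / beta_y."""
    MIN_SHARE = 1.0  # preset ratio that keeps an idle queue minimally serviceable
    if beta_x == 0 and beta_y == 0:
        return MIN_SHARE, MIN_SHARE       # both idle: keep only the standby minimum
    if beta_x == 0:
        return MIN_SHARE, A - MIN_SHARE   # aggregation queue idle
    if beta_y == 0:
        return A - MIN_SHARE, MIN_SHARE   # scheduling queue idle
    beta = beta_x / beta_y                # current load factor
    y = A / (1 + alpha * beta)
    x = A - y
    return x, y

# Example: A = 90, alpha = 3, two aggregation tasks versus one scheduling task
# give beta = 2, so x ≈ 77.1% and y ≈ 12.9%.
print(compute_ratios(2, 1))
```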
In one embodiment, as shown in fig. 2, step S101 includes:
s201, starting a corresponding task by a timing script according to the starting time of different tasks and moving the task to a data aggregation queue;
the step is used for counting the number of the current tasks submitted to the Yarn by the data aggregation queue at regular time, each task has the starting time at regular time, after the starting time of the corresponding task is reached, the task is started by the timing script and then is moved to the data aggregation queue, for example, the data aggregation queue of the current day is provided with 2 tasks (a and b) in series according to time, the time for the task a to be started regularly every day is configured to be 0:30, the time for the task b to be started regularly every day is configured to be 1:30, the number of the tasks which are acquired by using the Yarn interface command and are submitted to the Yarn and still run currently is 1 at 0:30-1:30, the number of tasks currently still running, which are submitted to Yarn, is acquired after 1:30, and based on this, after the corresponding tasks are started according to the starting time configuration of different tasks, the number of the current tasks serially submitted to Yarn in the data aggregation queue can be obtained at different moments.
S202, determining the merge end time of each task, scheduling the corresponding task according to the merge end time by the timing script, and moving the task to the data scheduling queue;
the step is used for carrying out timing statistics on the number of current tasks submitted to Yarn by a data scheduling queue, the execution time of each task in the data aggregation queue is indefinite, the execution ending time is the merging ending time only after the execution in the data aggregation queue is ended, and then the corresponding tasks are started by a timing script according to the merging ending time; then, automatically triggering a data scheduling program, moving the task to a data scheduling queue, submitting the task to Yarn by the data scheduling queue, updating the number of the current tasks submitted to Yarn by the data scheduling queue, such as the above-mentioned tasks a and b, and obtaining the number of the tasks submitted to Yarn by the data scheduling queue and still running currently by the data scheduling queue to be 1 at 1:00-2:00 by using a Yarn interface command if the merging end time of the task a is judged to be 1:00 and the merging end time of the task b is judged to be 2:00, and specifically, the process of judging the merging end time of each task is set forth in the following steps S301-S303.
S203, counting the number of the current tasks of the data aggregation queue and the number of the current tasks of the data scheduling queue according to a preset time interval, and updating the current load factor according to the ratio of the number of the current tasks of the data aggregation queue to the number of the current tasks of the data scheduling queue.
In this step, a count is performed once per preset time interval. Specifically, the current task number of the data aggregation queue and the current task number of the data scheduling queue are counted in the manner of steps S201-S202, and the current load factor is then updated as the ratio of the counted number of current tasks of the data aggregation queue to the number of current tasks of the data scheduling queue, i.e. β_x / β_y.
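Putting the pieces together, a periodic driver for steps S201-S203 could look like the sketch below, which reuses the helpers sketched earlier (running_tasks, compute_ratios, set_queue_capacities); the interval length is an assumed value, not one given in the text.

```python
import time

REFRESH_INTERVAL = 300  # assumed preset time interval, in seconds

def refresh_loop() -> None:
    """Every interval: re-count both queues, recompute the load factor and the
    resource shares, and push the new shares to the Capacity Scheduler."""
    while True:
        beta_x = running_tasks("data_merge")   # current tasks of the data aggregation queue
        beta_y = running_tasks("data_dev")     # current tasks of the data scheduling queue
        x, y = compute_ratios(beta_x, beta_y)  # load factor beta_x / beta_y applied inside
        set_queue_capacities(x, y)
        time.sleep(REFRESH_INTERVAL)
```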
In one embodiment, as shown in fig. 3, which details the process of determining the merge end time of each task, step S202 includes:
s301, in a data aggregation queue, regularly scanning whether each task has a record of a corresponding warehousing mark in the mysql table, if so, entering the step S202, and otherwise, jumping to the step S203;
s302, judging the corresponding task ending time, scheduling the corresponding task by the timing script, and triggering a data scheduling program to move the task to a data scheduling queue;
and S303, waiting for next scanning.
In this embodiment, each task in the data aggregation queue may generate a new hive partition table during execution. The record count of the hive partition table generated by each task is scanned periodically, or it is checked whether the corresponding partition hdfs directory associated with the hive partition table has been generated. If the record count reaches a preset value or the partition hdfs directory has been generated, a record with the corresponding task's warehousing flag is inserted into the mysql table. During the periodic scan, it is then determined whether each task has a record with the corresponding warehousing flag in the mysql table; if it does, the task is judged to have ended.
When the corresponding task is judged to have ended, the data scheduling program is triggered and the task is moved to the data scheduling queue.
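As a hedged illustration of steps S301-S302, the sketch below checks whether a task's partition hdfs directory has appeared and, if so, inserts a warehousing-flag record into mysql so that the timing script can pick the task up and move it to the data scheduling queue. The mysql connection details, the table name task_store_flag, and the hdfs path are hypothetical.

```python
import subprocess
import pymysql

def partition_ready(hdfs_dir: str) -> bool:
    """`hdfs dfs -test -d` exits with code 0 when the partition directory exists."""
    return subprocess.run(["hdfs", "dfs", "-test", "-d", hdfs_dir]).returncode == 0

def mark_task_finished(task_id: str) -> None:
    """Insert a warehousing-flag record for the task into the (hypothetical) mysql flag table."""
    conn = pymysql.connect(host="localhost", user="etl", password="changeme", db="etl_meta")
    try:
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO task_store_flag (task_id, flag_time) VALUES (%s, NOW())",
                (task_id,),
            )
        conn.commit()
    finally:
        conn.close()

if partition_ready("/warehouse/ods/task_a/dt=2021-12-31"):
    mark_task_finished("task_a")
```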
The embodiment of the invention also provides an apparatus for allocating Yarn computing resources, which is used to execute any embodiment of the foregoing method for allocating Yarn computing resources. Specifically, referring to fig. 4, fig. 4 is a schematic block diagram of an apparatus for allocating Yarn computing resources according to an embodiment of the present invention.
As shown in fig. 4, the apparatus 400 for allocating Yarn computing resources includes: a load factor calculation unit 401, a priority factor setting unit 402, and a resource ratio updating unit 403.
A load factor calculation unit 401, configured to periodically obtain the number of current tasks submitted by the data aggregation queue and the number of current tasks submitted by the data scheduling queue, and calculate a current load factor;
a priority factor setting unit 402, configured to set a corresponding current priority factor according to task priorities in the data aggregation queue and the data scheduling queue;
and a resource ratio updating unit 403, configured to update the current Yarn calculation resource ratio of the data aggregation queue and the data scheduling queue according to the current load factor and the current priority factor.
The apparatus fully considers the serial service relationship between the queues and the isolated characteristics of the tasks within each queue, and dynamically refreshes the Yarn computing resources of the data aggregation queue and the data scheduling queue. With the sum of their Yarn computing resources fixed, the current Yarn computing resource ratio of each queue is thus allocated dynamically, so that the overall scheduling resources are utilized to the maximum.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The apparatus for allocating Yarn computing resources described above may be implemented in the form of a computer program that can run on a computer device such as the one shown in fig. 5.
Referring to fig. 5, fig. 5 is a schematic block diagram of a computer device according to an embodiment of the present invention. The computer device 500 is a server, and the server may be an independent server or a server cluster composed of a plurality of servers.
Referring to fig. 5, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032, when executed, causes the processor 502 to perform a method for allocating Yarn computing resources.
The processor 502 is used to provide computing and control capabilities that support the operation of the overall computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 in the non-volatile storage medium 503; when the computer program 5032 is executed by the processor 502, the processor 502 can perform a method for allocating Yarn computing resources.
The network interface 505 is used for network communication, such as providing transmission of data information. Those skilled in the art will appreciate that the configuration shown in fig. 5 is a block diagram of only a portion of the configuration associated with aspects of the present invention and is not intended to limit the computing device 500 to which aspects of the present invention may be applied, and that a particular computing device 500 may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
Those skilled in the art will appreciate that the embodiment of a computer device illustrated in fig. 5 does not constitute a limitation on the specific construction of the computer device, and that in other embodiments a computer device may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. For example, in some embodiments, the computer device may only include a memory and a processor, and in such embodiments, the structures and functions of the memory and the processor are consistent with those of the embodiment shown in fig. 5, and are not described herein again.
It should be understood that, in the embodiment of the present invention, the Processor 502 may be a Central Processing Unit (CPU), and the Processor 502 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc. Wherein a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In another embodiment of the invention, a computer-readable storage medium is provided. The computer-readable storage medium may be a non-volatile computer-readable storage medium. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the method for allocating Yarn computing resources according to the embodiment of the invention.
The storage medium is a physical and non-transitory storage medium, and may be various physical storage media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, devices and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for allocating Yarn computing resources, comprising:
the method comprises the steps of regularly obtaining the number of current tasks submitted by a data aggregation queue and the number of current tasks submitted by a data scheduling queue, and calculating a current load factor;
setting a corresponding current priority factor according to the task priority in the data aggregation queue and the data scheduling queue;
and updating the current Yarn computing resource ratio of the data aggregation queue and the data scheduling queue according to the current load factor and the current priority factor.
2. The method for allocating Yarn computing resources of claim 1, wherein the current Yarn computing resource ratio of the data aggregation queue and the data scheduling queue is calculated according to the following formula:
x + y = A (x ∈ [1%, A%), y ∈ [1%, A%));
x / y = α * β (α = 3.0);
β = β_x / β_y;
wherein x represents the current Yarn computing resource ratio of the data aggregation queue, y represents the current Yarn computing resource ratio of the data scheduling queue, A represents the total resource ratio that the whole cluster environment can assign to x and y after the Yarn computing resources currently occupied by other queues are removed, the value of A not exceeding 100%, β represents the current load factor, α represents the current priority factor, β_x represents the current number of tasks of the data aggregation queue, and β_y represents the current number of tasks of the data scheduling queue.
3. The method for allocating Yarn computing resources of claim 1, wherein the periodically obtaining the current number of tasks in the data aggregation queue and the current number of tasks in the data scheduling queue and calculating the current load factor comprises:
starting a corresponding task by a timing script according to the starting time of different tasks and moving the task to a data aggregation queue;
determining the merge end time of each task, scheduling the corresponding task according to the merge end time by the timing script, and moving the task to the data scheduling queue;
and counting the number of the current tasks of the data aggregation queue and the number of the current tasks of the data scheduling queue according to a preset time interval, and updating the current load factor according to the ratio of the number of the current tasks of the data aggregation queue to the number of the current tasks of the data scheduling queue.
4. The method for allocating Yarn computing resources of claim 3, wherein said determining the merge end time of each task, scheduling the corresponding task according to the merge end time by the timing script, and moving the task to the data scheduling queue comprises:
in the data aggregation queue, periodically scanning whether each task has a record with the corresponding warehousing flag in the mysql table, and if so, judging that the corresponding task has ended;
and after the corresponding task is judged to be finished, the corresponding task is scheduled by the timing script, and a data scheduling program is triggered to move the task to the data scheduling queue.
5. The method for allocating Yarn computing resources of claim 4, wherein the step of, in the data aggregation queue, periodically scanning whether each task has a record with the corresponding warehousing flag in the mysql table, and if so, judging that the corresponding task has ended comprises:
and in the data aggregation queue, periodically scanning the record count of the hive partition table generated while each task runs, or scanning whether the corresponding partition hdfs directory associated with the hive partition table has been generated; if the record count reaches a preset value or the partition hdfs directory has been generated, inserting a record with the corresponding task's warehousing flag into the mysql table and judging that the corresponding task has ended.
6. The method for allocating Yarn computing resources of claim 1, wherein after the periodically obtaining the current number of tasks in the data aggregation queue and the current number of tasks in the data scheduling queue, the method comprises:
and when the number of the current tasks of the data aggregation queue is 0, setting the current Yarn computing resource ratio of the data aggregation queue as a preset ratio.
7. The method for allocating Yarn computing resources of claim 1, wherein after the periodically obtaining the current number of tasks in the data aggregation queue and the current number of tasks in the data scheduling queue, further comprising:
and when the current task number of the data scheduling queue is 0, setting the current Yarn computing resource ratio of the data scheduling queue as a preset ratio.
8. An apparatus for allocating Yarn computing resources, comprising:
the load factor calculation unit is used for regularly acquiring the number of current tasks submitted by the data aggregation queue and the number of current tasks submitted by the data scheduling queue and calculating a current load factor;
the priority factor setting unit is used for setting a corresponding current priority factor according to the task priorities in the data aggregation queue and the data scheduling queue;
and the resource ratio updating unit is used for updating the current Yarn computing resource ratio of the data aggregation queue and the data scheduling queue according to the current load factor and the current priority factor.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method for allocating Yarn computing resources according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to carry out the method for allocating Yarn computing resources according to any one of claims 1 to 7.
CN202111657174.1A 2021-12-31 2021-12-31 Method and device for allocating Yarn computing resources, computer equipment and storage medium Pending CN114296934A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111657174.1A CN114296934A (en) 2021-12-31 2021-12-31 Method and device for allocating Yarn computing resources, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111657174.1A CN114296934A (en) 2021-12-31 2021-12-31 Method and device for allocating Yarn computing resources, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114296934A true CN114296934A (en) 2022-04-08

Family

ID=80973575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111657174.1A Pending CN114296934A (en) 2021-12-31 2021-12-31 Method and device for allocating Yarn computing resources, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114296934A (en)

Similar Documents

Publication Publication Date Title
CN109582455B (en) Multithreading task processing method and device and storage medium
US10772115B2 (en) Resource scheduling method and server
CN111400022A (en) Resource scheduling method and device and electronic equipment
CN107688492B (en) Resource control method and device and cluster resource management system
US8627325B2 (en) Scheduling memory usage of a workload
WO2018120991A1 (en) Resource scheduling method and device
CN110389816B (en) Method, apparatus and computer readable medium for resource scheduling
US10686728B2 (en) Systems and methods for allocating computing resources in distributed computing
CN110599148B (en) Cluster data processing method and device, computer cluster and readable storage medium
EP4242843A1 (en) Graphics card memory management method and apparatus, device, and system
CN107515781B (en) Deterministic task scheduling and load balancing system based on multiple processors
JP2022539955A (en) Task scheduling method and apparatus
CN114265679A (en) Data processing method and device and server
CN105677744A (en) Method and apparatus for increasing service quality in file system
CN113254179B (en) Job scheduling method, system, terminal and storage medium based on high response ratio
CN112925616A (en) Task allocation method and device, storage medium and electronic equipment
CN111124674A (en) Hardware resource management method, storage medium and terminal
CN113010309B (en) Cluster resource scheduling method, device, storage medium, equipment and program product
CN112817722A (en) Time-sharing scheduling method based on priority, terminal and storage medium
CN112181498A (en) Concurrency control method, device and equipment
CN114296934A (en) Method and device for allocating Yarn computing resources, computer equipment and storage medium
CN112395063B (en) Dynamic multithreading scheduling method and system
CN115878910A (en) Line query method, device and storage medium
CN113127289B (en) Resource management method, computer equipment and storage medium based on YARN cluster
CN110955522A (en) Resource management method and system for coordination performance isolation and data recovery optimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination