CN111679900B - Task processing method and device - Google Patents

Task processing method and device

Info

Publication number
CN111679900B
CN111679900B
Authority
CN
China
Prior art keywords
resources
task
task set
tasks
scheduling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010545414.8A
Other languages
Chinese (zh)
Other versions
CN111679900A (en)
Inventor
段雄
徐福生
朱志新
史雪琼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202010545414.8A priority Critical patent/CN111679900B/en
Publication of CN111679900A publication Critical patent/CN111679900A/en
Application granted granted Critical
Publication of CN111679900B publication Critical patent/CN111679900B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources to service a request
    • G06F9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022 Mechanisms to release resources
    • G06F9/5083 Techniques for rebalancing the load in a distributed system
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/5011 Pool


Abstract

An embodiment of the application provides a task processing method and a task processing device. The method is applied to a scheduling server of a distributed computing system; the scheduling server schedules at least one task scheduling queue, and each task scheduling queue corresponds to a different user. The method comprises the following steps: receiving a first request from a first user of a client, where the first request requests execution of at least one job, the at least one job corresponds to a first task set, and the first task set is located in a first task scheduling queue corresponding to the first user; allocating resources to the first task set; and, if the number of resources allocated to the first task set is smaller than the minimum number of resources required by the first task set and a first task in the set is waiting to be scheduled, releasing resources from those occupied by at least one second task set in the first task scheduling queue so that the first task can be executed. Embodiments of the application can thereby process user requests in real time.

Description

Task processing method and device
Technical Field
The embodiment of the application relates to computer technology, in particular to a task processing method and device.
Background
The Apache Spark system is a distributed computing system. By using the in-memory data abstraction of the Resilient Distributed Dataset (RDD), it greatly reduces disk reads and writes during data processing and thus greatly reduces running time.
After receiving a tenant request, the scheduling server in the current Apache Spark system determines that the request corresponds to at least one job and divides the at least one job into a plurality of tasks that form a task set; that is, one tenant request corresponds to one task set. The scheduling server distributes the task sets corresponding to the requests of different tenants to the same task scheduling queue, and tasks are executed in the order in which they appear in that queue. If task set 1, corresponding to a request of tenant 1, is ordered before task set 2, corresponding to a request of tenant 2, then when at least some tasks in task set 1 occupy all resources, the tasks in task set 2 must wait for those tasks to finish. If those tasks run for a long time, the tasks in task set 2 wait a long time before being executed, and the user request cannot be processed in real time.
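The starvation described above can be pictured with a toy model. This is my own illustration, not code from the patent; the names and resource counts are made-up assumptions.

```python
# Toy model of the existing single shared FIFO task scheduling queue.
def fifo_allocate(task_sets, total_resources):
    """Grant resources strictly in queue order; a later task set gets
    nothing while earlier sets hold everything."""
    free = total_resources
    grants = {}
    for name, demand in task_sets:
        granted = min(free, demand)  # earlier sets are served first
        grants[name] = granted
        free -= granted
    return grants

# Task set 1 (tenant 1) is first in the queue and demands all 4 resources,
# so task set 2 (tenant 2) is starved until task set 1 finishes.
grants = fifo_allocate([("task_set_1", 4), ("task_set_2", 2)], total_resources=4)
```

With long-running tasks in task set 1, tenant 2's request sits unserved for the whole duration, which is exactly the problem the application addresses.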
Disclosure of Invention
The embodiment of the application provides a task processing method and device, which can process a user request in real time.
In a first aspect, an embodiment of the present application provides a task processing method, where the method is applied to a scheduling server of a distributed computing system, where the scheduling server is configured to schedule at least one task scheduling queue, where the task scheduling queue includes at least one task set, and users corresponding to each task scheduling queue are different; the method comprises the following steps: receiving a first request from a first user of a client, wherein the first request is used for requesting to execute at least one job, a plurality of tasks contained in the at least one job form a first task set, and the first task set is positioned in a first task scheduling queue corresponding to the first user; allocating resources for the first set of tasks; if the number of the allocated resources for the first task set is smaller than the minimum number of resources required by the first task set and a first task waiting to be scheduled exists in the first task set, releasing resources from the resources occupied by at least one second task set in the first task scheduling queue to execute the first task, wherein the number of the resources occupied by the executing tasks in the second task set is larger than the minimum number of resources required by the second task set; wherein the number of released resources is less than or equal to a first number, which is a difference between a minimum number of resources required for the first task set and the number of allocated resources or the number of the first tasks.
In this scheme, when the number of resources allocated to the first task set in the first task scheduling queue is smaller than the minimum number of resources required by the first task set, and a first task in the set is waiting to be scheduled, resources are released from those occupied by at least one second task set in the same queue so that the first task can be executed; the first request corresponding to the first task set can thus be processed in real time. Because a second task set, by definition, has executing tasks occupying more resources than its own required minimum, the requests corresponding to the second task sets of the first task scheduling queue can also still be processed in real time. In addition, because different users correspond to different task scheduling queues, the scheduling server can flexibly set the resource scheduling policy of each task scheduling queue.
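As a rough sketch (my own illustration, not the patent's implementation), the trigger condition for the first task set and the donor condition for a second task set can be written as two predicates:

```python
def needs_resources(allocated, min_required, num_waiting):
    """A first task set triggers a release when it is below its required
    minimum and still has tasks ready but unscheduled."""
    return allocated < min_required and num_waiting > 0

def is_donor(running, min_required):
    """A second task set may donate resources only while its executing
    tasks occupy more than its own required minimum, so it stays at or
    above that minimum after donating."""
    return running > min_required
```

The second predicate is what keeps the donor sets' own requests processable in real time: a set is never driven below its required minimum.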
In one possible implementation, if the difference between the minimum number of resources required by the first task set and the allocated number of resources is greater than the number of the first tasks, the first number is the number of the first tasks; if that difference is less than or equal to the number of the first tasks, the first number is the difference itself.
In this scheme, when the difference between the minimum number of resources required by the first task set and the number of resources allocated to it is greater than the number of first tasks waiting to be scheduled, releasing only as many resources as there are first tasks already meets the real-time processing requirement of the first task set; there is no need to release the full difference, which saves system resources and reduces the impact on the second task sets in the first scheduling queue. When the difference is less than or equal to the number of first tasks waiting to be scheduled, releasing a number of resources equal to the difference ensures that the tasks in the first task set can be executed in time.
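In other words, the first number is the smaller of the resource deficit and the number of waiting tasks. A one-line sketch (illustrative only, names are my own):

```python
def release_bound(min_required, allocated, num_waiting):
    """First number: release no more than the deficit, and no more than
    there are waiting tasks to consume the freed resources (any extra
    would sit idle)."""
    deficit = min_required - allocated
    return min(deficit, num_waiting)
```

For example, with a minimum of 10, 4 allocated, and 3 waiting tasks, only 3 resources need to be freed even though the deficit is 6.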
In a possible implementation manner, releasing the resources from the resources occupied by the at least one second task set includes performing a first operation: judging whether the ith task set is a second task set and, if so, releasing resources from the resources it occupies, where the ith task set is any task set other than the first task set; initially, i is 1. Then i is increased by 1 and the first operation is repeated, until either the number of released resources equals the first number, or i = M and the number of released resources is still less than or equal to the first number, where M is the total number of task sets in the first task scheduling queue minus 1.
The scheme gives a specific implementation of releasing resources from the resources occupied by at least one second task set.
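One way to picture that iteration is the following sketch. It is my own rendering under stated assumptions (dictionary field names are hypothetical; the patent does not name them), not the patent's implementation:

```python
def release_from_other_sets(other_sets, first_number):
    """Walk the other task sets in the queue in order (i = 1..M), taking
    only the surplus above each set's own required minimum, until
    first_number resources have been freed or every set has been visited."""
    released = 0
    for ts in other_sets:
        if released >= first_number:
            break
        # Only a "second task set" has surplus > 0.
        surplus = ts["running"] - ts["min_required"]
        if surplus > 0:
            take = min(surplus, first_number - released)
            ts["running"] -= take
            released += take
    return released

sets = [
    {"running": 3, "min_required": 3},  # not a second task set: no surplus
    {"running": 6, "min_required": 4},  # second task set: surplus 2
    {"running": 5, "min_required": 2},  # second task set: surplus 3
]
freed = release_from_other_sets(sets, first_number=4)
```

Here the loop skips the first set, takes 2 from the second and 2 from the third, and stops once 4 resources are freed; no donor drops below its own minimum.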
In a possible implementation manner, the releasing the resource from the resources occupied by the ith task set includes: releasing a second amount of resources from the resources occupied by the ith task set, wherein the second amount is less than or equal to a third amount, and the third amount is the difference between the amount of resources occupied by the ith task set and the minimum amount of resources required by the ith task set.
In the scheme, the quantity of the resources released from the resources occupied by the ith task set is smaller than or equal to the difference between the quantity of the resources occupied by the ith task set and the quantity of the minimum resources required by the ith task set, so that the ith task set can be ensured to have at least the quantity of the minimum resources required by the ith task set, and the tasks in the ith task set can be still processed in time after the resources occupied by the ith task set are released.
In a possible implementation manner, the second number of resources is the resources occupied by the second number of executing tasks, and the start execution time of at least one task in the second number of executing tasks is later than the start execution time of other executing tasks in the ith task set.
Because releasing the resources occupied by executing tasks in the ith task set means those tasks must be executed again, this scheme releases the resources occupied by the executing tasks with the latest start execution times in the ith task set, which reduces the impact on the executing tasks of the ith task set.
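A sketch of that victim-selection heuristic (illustrative only; the field names are my assumptions):

```python
def pick_victims(executing_tasks, count):
    """Prefer the most recently started tasks: they have made the least
    progress, so killing and re-running them wastes the least work."""
    by_latest_start = sorted(executing_tasks,
                             key=lambda t: t["start"], reverse=True)
    return [t["id"] for t in by_latest_start[:count]]

tasks = [{"id": "t1", "start": 10},
         {"id": "t2", "start": 30},
         {"id": "t3", "start": 20}]
victims = pick_victims(tasks, 2)  # the two latest starters, t2 and t3
```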
In one possible implementation, the first job is allocated to a first scheduling thread pool, the first scheduling thread pool including a first parameter indicating the allocated amount of resources, a second parameter indicating the minimum amount of resources required for the first set of tasks, and a third parameter indicating the amount of the first tasks; before releasing the resources from the resources occupied by the at least one second set of tasks, further comprises: obtaining the allocated resource quantity according to the first parameter; obtaining the minimum resource quantity required by the first task set according to the second parameter; and obtaining the number of the first tasks according to the third parameter.
The scheme provides a specific implementation method for obtaining the number of first tasks waiting to be scheduled in the first task set, the number of resources allocated for the first task set and the minimum number of resources required by the first task set.
In a possible implementation manner, before releasing the resources from the resources occupied by the at least one second task set, the method further includes: determining that resources have not yet been released from the resources occupied by the at least one second task set to the first task set.
In the scheme, resources are released from resources occupied by at least one second task set to execute the first task in the first task set only once, so that the complexity of task resource scheduling can be reduced.
In one possible implementation, the first job is allocated to a first scheduling thread pool, the first scheduling thread pool including a fourth parameter indicating whether resources have been released from the resources occupied by the at least one second task set to the first task set; the determining includes: determining, according to the fourth parameter, that resources have not yet been released from the resources occupied by the at least one second task set to the first task set.
The scheme provides a specific implementation method for determining whether resources are released from the resources occupied by the at least one second task set to the first task set.
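The four scheduling-thread-pool parameters described above can be grouped into one record, as in the following sketch. The field names are my own (the patent does not name them), and this is an illustration of the idea, not the patent's implementation:

```python
from dataclasses import dataclass

@dataclass
class PoolParams:
    allocated: int               # first parameter: resources already allocated to the task set
    min_required: int            # second parameter: minimum resources the task set requires
    num_waiting: int             # third parameter: tasks in the set waiting to be scheduled
    released_once: bool = False  # fourth parameter: whether a release has already been done

def may_trigger_release(p: PoolParams) -> bool:
    """Release at most once per task set (reducing scheduling complexity),
    and only when the set is under-provisioned with tasks waiting."""
    return (not p.released_once
            and p.allocated < p.min_required
            and p.num_waiting > 0)
```

Reading the three quantities from the thread pool's parameters, and checking the one-shot flag, is all the scheduler needs before deciding whether to release resources.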
In a second aspect, an embodiment of the present application provides a task processing device, where the task processing device is at least part of a scheduling server of a distributed computing system, the scheduling server is configured to schedule at least one task scheduling queue, the task scheduling queue includes at least one task set, and the users corresponding to each task scheduling queue are different. The device comprises: a transceiver module, configured to receive a first request from a first user of a client, where the first request requests execution of at least one job, a plurality of tasks contained in the at least one job form a first task set, and the first task set is located in a first task scheduling queue corresponding to the first user; and a processing module, configured to allocate resources to the first task set. If the number of resources allocated to the first task set is smaller than the minimum number of resources required by the first task set, and a first task waiting to be scheduled exists in the first task set, the processing module is further configured to release resources from the resources occupied by at least one second task set in the first task scheduling queue so as to execute the first task, where the number of resources occupied by the executing tasks in the second task set is greater than the minimum number of resources required by the second task set, and the number of released resources is less than or equal to a first number, the first number being either the difference between the minimum number of resources required by the first task set and the allocated number of resources, or the number of the first tasks.
In one possible implementation, if the difference between the minimum number of resources required by the first task set and the allocated number of resources is greater than the number of the first tasks, the first number is the number of the first tasks; if that difference is less than or equal to the number of the first tasks, the first number is the difference itself.
In a possible implementation manner, the processing module is specifically configured to perform a first operation: judging whether the ith task set is a second task set and, if so, releasing resources from the resources it occupies, where the ith task set is any task set other than the first task set; initially, i is 1. Then i is increased by 1 and the first operation is repeated, until either the number of released resources equals the first number, or i = M and the number of released resources is still less than or equal to the first number, where M is the total number of task sets in the first task scheduling queue minus 1.
In a possible implementation manner, the processing module is specifically configured to: releasing a second amount of resources from the resources occupied by the ith task set, wherein the second amount is less than or equal to a third amount, and the third amount is the difference between the amount of resources occupied by the ith task set and the minimum amount of resources required by the ith task set.
In a possible implementation manner, the second number of resources is the resources occupied by the second number of executing tasks, and the start execution time of at least one task in the second number of executing tasks is later than the start execution time of other executing tasks in the ith task set.
In one possible implementation, the first job is allocated to a first scheduling thread pool, the first scheduling thread pool including a first parameter indicating the allocated amount of resources, a second parameter indicating the minimum amount of resources required for the first set of tasks, and a third parameter indicating the amount of the first tasks; before the processing module releases resources from the resources occupied by the at least one second set of tasks, the processing module is further configured to: obtaining the allocated resource quantity according to the first parameter; obtaining the minimum resource quantity required by the first task set according to the second parameter; and obtaining the number of the first tasks according to the third parameter.
In a possible implementation, before the processing module releases resources from the resources occupied by the at least one second task set, the processing module is further configured to: determine that resources have not yet been released from the resources occupied by the at least one second task set to the first task set.
In one possible implementation, the first job is allocated to a first scheduling thread pool, the first scheduling thread pool including a fourth parameter indicating whether resources have been released from the resources occupied by the at least one second task set to the first task set; the processing module is specifically configured to determine, according to the fourth parameter, that resources have not yet been released from the resources occupied by the at least one second task set to the first task set.
In a third aspect, an embodiment of the present application provides an electronic device, including at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect or any one of the possible implementations of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer storage medium, including: computer-executable instructions for implementing the method of the first aspect or any of the possible implementation manners of the first aspect.
In the present application, when the number of resources allocated to the first task set in the first task scheduling queue is smaller than the minimum number of resources required by the first task set, and a first task in the set is waiting to be scheduled, resources are released from those occupied by at least one second task set in the first task scheduling queue so that the first task can be executed, and the first request corresponding to the first task set can thus be processed in real time. Because a second task set, by definition, has executing tasks occupying more resources than its own required minimum, the requests corresponding to the second task sets of the first task scheduling queue can also still be processed in real time. In addition, because different users correspond to different task scheduling queues, the scheduling server can flexibly set the resource scheduling policy of each task scheduling queue.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions of the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present application, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a first schematic diagram of conventional distributed system scheduling;
FIG. 2 is a second schematic diagram of conventional distributed system scheduling;
FIG. 3 is a system architecture diagram provided in an embodiment of the present application;
FIG. 4 is a first flowchart of a task processing method according to an embodiment of the present application;
FIG. 5 is a first schematic diagram of a distributed system scheduling according to an embodiment of the present application;
FIG. 6 is a second schematic diagram of distributed system scheduling according to an embodiment of the present application;
FIG. 7 is a second flowchart of a task processing method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a task processing device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the objects before and after it are in an "or" relationship. "At least one of" the following items means any combination of these items, including any combination of single items or plural items. For example, "at least one of a, b, or c" may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be single or plural. The terms "first", "second", and the like herein are used to distinguish similar objects and do not necessarily describe a particular sequential or chronological order.
Elements according to the embodiments of the present application will be described below.
Clients submit tenant-triggered requests (a tenant may also be referred to as a user), such as a request to query inventory, to a scheduling server of the distributed computing system. The scheduling server determines at least one job corresponding to the request. Each job is split into one or more stages, and each stage internally comprises one or more tasks that are executed concurrently; in other words, each job is split into a plurality of tasks, which may be called the plurality of tasks corresponding to that job. The stages into which a job is divided are executed sequentially.
Currently, a scheduling server distributes the jobs of requests triggered by different tenants to different scheduling thread pools, and the jobs of requests triggered by the same tenant to the same scheduling thread pool. As shown in fig. 1 and 2, the scheduling server determines that the job requested by tenant 1 is job 1, allocates job 1 to scheduling thread pool 1, and places job 1, parsed out of scheduling thread pool 1, into the job queue. The scheduling server splits job 1, taken from the job queue, into a plurality of tasks that form task set 1; that is, the request triggered by tenant 1 corresponds to task set 1. If the scheduling server also receives a request triggered by tenant 2, it determines that the job requested by tenant 2 is job 2, allocates job 2 to scheduling thread pool 2, and places job 2, parsed out of scheduling thread pool 2, into the same job queue as job 1. The scheduling server splits job 2, taken from the job queue, into a plurality of tasks that form task set 2; that is, the request triggered by tenant 2 corresponds to task set 2. In the end, every task in task set 1 and every task in task set 2 is located in the same task scheduling queue. That is, although the jobs of requests triggered by different tenants are currently allocated to different scheduling thread pools, the tasks of those jobs end up in the same task scheduling queue.
The scheduling server executes the tasks in the task scheduling queue according to the order of the tasks in the task scheduling queue. If the task set 1 corresponding to the request of the tenant 1 is before the task set 2 corresponding to the request of the tenant 2 in the task scheduling queue, and at least part of the tasks in the task set 1 occupy all resources, the tasks in the task set 2 need to wait for the completion of the execution of the at least part of the tasks. If the execution time of the at least part of the tasks is long, the tasks in the task set 2 need to wait for a long time to be executed, i.e. the request of the tenant 2 cannot be processed in time.
In order to solve the above technical problems, the inventor finds that when the scheduling server receives a new request and the current system resources cannot meet the resource requirement of the new task set corresponding to the new request, part of the resources occupied by tasks of other task sets in the task scheduling queue where the new task set is located can be released to execute the tasks waiting to be scheduled in the new task set, thereby meeting the requirement of processing tenant requests in time. A task waiting to be scheduled is a task that is ready to be executed immediately but for which no resources are currently available.
The inventor further finds that if the number of resources allocated to the new task set is smaller than the minimum number of resources required by the new task set, and a task waiting to be scheduled exists in the new task set, it can be determined that the current system resources cannot meet the resource requirement of the new task set corresponding to the new request; at that moment, the step of releasing part of the resources occupied by tasks of other task sets in the task scheduling queue where the new task set is located can be triggered. However, current technology does not record the number of resources allocated to a task set, so it cannot determine whether the current system resources meet the resource requirement of the tasks in the new task set. To overcome this difficulty, the scheduling server must be enabled, by some technical means, to obtain the number of resources allocated to the new task set. One possible means found by the inventor is to add a new parameter to the scheduling thread pool, the new parameter indicating the number of resources allocated to the task set obtained after the jobs allocated to that scheduling thread pool are divided.
It should be noted that each task occupies one core when executing, and one core corresponds to one resource; that is, each executing task occupies one resource. Thus, if the resources occupied by N executing tasks are released, N cores, i.e. N resources, are released.
Fig. 3 is a system architecture diagram according to an embodiment of the present application. Referring to fig. 3, the system architecture includes a distributed computing system, which comprises a scheduling server. The distributed computing system may be, for example, an Apache Spark system.
The task processing method of the present application will be described below with reference to specific examples.
Fig. 4 is a flowchart of a task processing method according to an embodiment of the present application. The execution body of the present embodiment may be a scheduling server in a distributed computing system. Referring to fig. 4, the method of the present embodiment includes:
Step S401, a first request from a first user of a client is received, where the first request is used to request execution of at least one job, and a plurality of tasks included in the at least one job form a first task set, the first task set being located in a first task scheduling queue corresponding to the first user. The tasks included in a job are the plurality of tasks obtained by splitting the job.
In a specific implementation, the scheduling server receives the first request triggered by the first user and sent by the client, where the first request is used to request the scheduling server to execute at least one job; the first request may be, for example, a request for querying inventory. The scheduling server determines the at least one job corresponding to the first request. The scheduling server allocates the at least one job to a first scheduling thread pool, allocates the at least one job parsed from the first scheduling thread pool to a first job queue, and splits each job parsed from the first job queue into a plurality of stages, each stage comprising a plurality of tasks; the plurality of tasks obtained by splitting the at least one job form the first task set. The scheduling server then allocates the first task set to the first task scheduling queue corresponding to the first user.
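The flow above (request → jobs → scheduling thread pool → job queue → stages → tasks → task scheduling queue) can be sketched as follows. This is an illustrative sketch only; all class and function names are assumptions for the example, not the actual implementation described in this application.

```python
from dataclasses import dataclass

@dataclass
class Task:
    job_id: int
    stage_id: int
    task_id: int

def split_job_into_tasks(job_id, num_stages, tasks_per_stage):
    """Split one job into stages, and each stage into tasks (illustrative)."""
    tasks = []
    for stage_id in range(num_stages):
        for task_id in range(tasks_per_stage):
            tasks.append(Task(job_id, stage_id, task_id))
    return tasks

# The tasks of all jobs in the first request form the first task set,
# which is then placed into the first user's task scheduling queue.
first_task_set = []
for job_id in (0, 1):  # assume the first request corresponds to two jobs
    first_task_set.extend(split_job_into_tasks(job_id, num_stages=2, tasks_per_stage=3))

print(len(first_task_set))  # 12 tasks in the first task set
```

Here two jobs, each split into two stages of three tasks, yield a first task set of twelve tasks; the stage/task counts are arbitrary example values.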
Optionally, the scheduling server is configured to schedule at least one task scheduling queue, where each task scheduling queue includes at least one task set, and each task scheduling queue corresponds to a different user. At this time, the first task scheduling queue corresponding to the first user and the second task scheduling queue corresponding to the second user are not the same task scheduling queue.
For example, referring to fig. 5 and 6, the scheduling server determines that the job corresponding to the request triggered by tenant 1 is job 1, allocates job 1 to scheduling thread pool 1, and allocates job 1 parsed from scheduling thread pool 1 to job queue 1. The scheduling server splits job 1 parsed from job queue 1 into a plurality of tasks, and the plurality of tasks obtained by splitting job 1 form task scheduling queue 1. If the scheduling server also receives a request triggered by tenant 2, the scheduling server determines that the request triggered by tenant 2 corresponds to job 2, allocates job 2 to scheduling thread pool 2, and allocates job 2 parsed from scheduling thread pool 2 to job queue 2. The scheduling server splits job 2 parsed from job queue 2 into a plurality of tasks, and the plurality of tasks obtained by splitting job 2 form task scheduling queue 2.
Since the users corresponding to the task scheduling queues are different, the task scheduling queues of different users are isolated from one another, and the scheduling server can apply a different resource scheduling policy to the task scheduling queue of each user; that is, the scheduling server's resource scheduling policy for each task scheduling queue is more flexible.
Optionally, the scheduling server is configured to schedule a single task scheduling queue, and the task sets corresponding to the requests triggered by different users are all allocated to this same task scheduling queue. In this case, the task scheduling queue corresponding to the first user is the same as the task scheduling queue corresponding to the second user; that is, every user corresponds to the first task scheduling queue.
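The two deployment options above (one isolated task scheduling queue per user, or one queue shared by all users) can be sketched as a simple routing decision. The class and field names below are illustrative assumptions, not part of the described system.

```python
from collections import defaultdict

class QueueRouter:
    """Illustrative sketch: route each user's task sets to a queue.

    With per_user_queues=True every tenant gets an isolated task
    scheduling queue (so each queue can get its own scheduling policy);
    with per_user_queues=False all tenants share a single queue.
    """
    def __init__(self, per_user_queues=True):
        self.per_user_queues = per_user_queues
        self.queues = defaultdict(list)  # queue key -> list of task sets

    def enqueue(self, user, task_set):
        key = user if self.per_user_queues else "shared"
        self.queues[key].append(task_set)
        return key

isolated = QueueRouter(per_user_queues=True)
isolated.enqueue("tenant1", ["task_set_1"])
isolated.enqueue("tenant2", ["task_set_2"])
print(len(isolated.queues))  # 2: one queue per tenant

shared = QueueRouter(per_user_queues=False)
shared.enqueue("tenant1", ["task_set_1"])
shared.enqueue("tenant2", ["task_set_2"])
print(len(shared.queues))    # 1: a single shared queue
```

The per-user variant is what enables the per-queue scheduling-policy flexibility noted above; the shared variant trades that isolation for simplicity.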
Step S402, resources are allocated for the first task set.
After the first task set is obtained, the scheduling server allocates resources for the first task set.
In step S403, if the number of resources allocated to the first task set is smaller than the minimum number of resources required by the first task set and there is a first task waiting to be scheduled in the first task set, then resources are released from the resources occupied by at least one second task set included in the first task scheduling queue in order to execute the first task, where the number of resources occupied by the executing tasks in the second task set is greater than the minimum number of resources required by the second task set.
In order to process the first request in real time, the scheduling server obtains the number of resources allocated to the first task set and the minimum number of resources required by the first task set. When the number of resources allocated to the first task set is smaller than the minimum number of resources required by the first task set and there is a first task waiting to be scheduled in the first task set, the resources for executing the tasks in the first task set are insufficient, and resources need to be scheduled for the first task set; that is, resources need to be released from the resources occupied by at least one second task set included in the first task scheduling queue so as to execute the first task. The second task set satisfies the following condition: the number of resources occupied by the executing tasks in the second task set is greater than the minimum number of resources required by the second task set. When the second task set satisfies this condition, processing the first request in real time does not prevent the request corresponding to the second task set in the first task scheduling queue from also being processed in real time.
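The trigger condition of step S403, and the condition a second task set must satisfy before it may give up resources, can be sketched as two small predicates. The function names are illustrative assumptions.

```python
def needs_resource_release(allocated, min_required, pending_tasks):
    """Trigger condition sketched from step S403: release resources from
    other task sets only when the first task set's allocation is below its
    required minimum AND it still has tasks waiting to be scheduled."""
    return allocated < min_required and pending_tasks > 0

def can_donate(occupied, min_required):
    """A second task set may donate resources only if its executing tasks
    occupy more resources than the set's own required minimum."""
    return occupied > min_required

# First task set: 3 resources allocated, needs at least 5, 2 tasks waiting.
print(needs_resource_release(allocated=3, min_required=5, pending_tasks=2))  # True
# Candidate second task set: 8 resources occupied, minimum 5 -> may donate.
print(can_donate(occupied=8, min_required=5))  # True
# A set sitting exactly at its minimum must not be preempted.
print(can_donate(occupied=5, min_required=5))  # False
```

The `can_donate` guard is what guarantees the second task sets keep at least their minimum resources and so can still serve their own requests in real time.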
Optionally, the number of resources allocated to the first task set may be indicated by a first parameter of the first scheduling thread pool; that is, the number of resources allocated to the first task set may be obtained according to the first parameter. The first parameter may be referred to as the maxshare parameter of the first scheduling thread pool.
Optionally, the minimum number of resources required by the first task set may be indicated by a second parameter of the first scheduling thread pool; that is, the minimum number of resources required by the first task set may be obtained according to the second parameter. The second parameter may be referred to as the minshare parameter of the first scheduling thread pool.
Optionally, the parameters of the first scheduling thread pool may further include a fourth parameter, where the fourth parameter indicates either that resources have been scheduled for the first task set or that resources have not been scheduled for the first task set; scheduling resources for the first task set refers to releasing resources from the resources occupied by the at least one second task set to the first task set. The fourth parameter ensures that, between obtaining the first task set and completing the execution of its tasks, resources are scheduled for the first task set only once. If the tasks in the first task set have not finished executing but resources have already been scheduled for the first task set, the fourth parameter indicates that resources have been scheduled for the first task set; if the tasks in the first task set have not finished executing and resources have not been scheduled for the first task set, the fourth parameter indicates that resources have not been scheduled for the first task set. The fourth parameter may be referred to as the hasScheduler parameter of the first scheduling thread pool. When the parameters of the first scheduling thread pool include the fourth parameter, the complexity of resource scheduling can be reduced.
Accordingly, when the first scheduling thread pool includes the fourth parameter, resources are released from the resources occupied by the at least one second task set of the first task scheduling queue to the first task set when the number of resources allocated to the first task set is smaller than the minimum number of resources required by the first task set, there is a first task waiting to be scheduled in the first task set, and resources have not yet been released from the resources occupied by the at least one second task set to the first task set.
In addition, in order not to waste system resources, the number of resources released from the resources occupied by the at least one second task set is smaller than or equal to a first number, where the first number is either the difference between the minimum number of resources required by the first task set and the number of resources allocated to the first task set, or the number of first tasks.
Specifically, if the difference between the minimum number of resources required by the first task set and the number of resources allocated to the first task set is greater than the number of first tasks, the first number is the number of first tasks.
When the difference between the minimum number of resources required by the first task set and the number of resources allocated to the first task set is greater than the number of first tasks waiting to be scheduled in the first task set, the real-time resource requirement of the first task set can be met by releasing only as many resources as there are first tasks from the resources occupied by the at least one second task set; there is no need to release as many resources as the full difference. This saves system resources and reduces the impact on the second task sets in the first task scheduling queue.
If the difference between the minimum number of resources required by the first task set and the number of resources allocated to the first task set is less than or equal to the number of first tasks, the first number is that difference.
When the difference between the minimum number of resources required by the first task set and the number of resources allocated to the first task set is smaller than or equal to the number of first tasks waiting to be scheduled in the first task set, setting the number of released resources equal to that difference ensures that the tasks in the first task set can be executed in time.
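The two cases above reduce to taking the smaller of the first task set's resource deficit and its number of waiting tasks. A minimal sketch (function name is an illustrative assumption):

```python
def first_number(min_required, allocated, pending_tasks):
    """Upper bound on the resources to release from the second task sets:
    the smaller of the first task set's resource deficit and the number
    of its tasks waiting to be scheduled."""
    deficit = min_required - allocated
    return pending_tasks if deficit > pending_tasks else deficit

# Deficit (5 - 1 = 4) exceeds the 2 waiting tasks: release at most 2,
# since only 2 tasks could use the freed resources anyway.
print(first_number(min_required=5, allocated=1, pending_tasks=2))  # 2
# Deficit (5 - 3 = 2) is at most the 4 waiting tasks: release at most 2,
# which is enough to bring the allocation up to the required minimum.
print(first_number(min_required=5, allocated=3, pending_tasks=4))  # 2
```

Both branches are simply `min(deficit, pending_tasks)`; releasing more than either bound would waste resources that the first task set cannot immediately use.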
Optionally, the number of first tasks waiting to be scheduled in the first task set may be indicated by a third parameter of the first scheduling thread pool; that is, the number of first tasks waiting to be scheduled in the first task set may be obtained according to the third parameter. The third parameter may be referred to as the pendingpass parameter of the first scheduling thread pool.
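The four scheduling-thread-pool parameters described above can be collected into one structure, together with the full trigger condition that uses the fourth parameter. The class and field names below are illustrative assumptions; only the maxshare/minshare/pendingpass/hasScheduler naming comes from the description.

```python
from dataclasses import dataclass

@dataclass
class SchedulingThreadPool:
    """Sketch of the four per-pool parameters from the description."""
    maxshare: int        # first parameter: resources allocated to the task set
    minshare: int        # second parameter: minimum resources the set requires
    pendingpass: int     # third parameter: tasks waiting to be scheduled
    has_scheduler: bool  # fourth parameter: resources already scheduled once?

    def should_schedule(self):
        # Full condition: allocation below the minimum, tasks still waiting,
        # and resources not yet released to this task set (scheduled at most once).
        return (self.maxshare < self.minshare
                and self.pendingpass > 0
                and not self.has_scheduler)

pool = SchedulingThreadPool(maxshare=2, minshare=5, pendingpass=3,
                            has_scheduler=False)
print(pool.should_schedule())  # True: below minimum, tasks waiting, not yet scheduled
pool.has_scheduler = True      # resources scheduled once; do not schedule again
print(pool.should_schedule())  # False
```

The `has_scheduler` flag is what keeps resource scheduling to a single pass per task set, which is the stated reason the fourth parameter reduces scheduling complexity.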
In this embodiment, when the number of resources allocated to the first task set in the first task scheduling queue is smaller than the minimum number of resources required by the first task set and there is a first task waiting to be scheduled in the first task set, resources are released from the resources occupied by at least one second task set in the first task scheduling queue in order to execute the first task, so that the first request corresponding to the first task set can be processed in real time. Because the second task set satisfies the condition that the number of resources occupied by its executing tasks is greater than the minimum number of resources it requires, the request corresponding to the at least one second task set of the first task scheduling queue can also be processed in real time. In addition, when different users correspond to different task scheduling queues, the scheduling server's resource scheduling policy for each task scheduling queue is also more flexible.
A specific implementation of "releasing resources from the resources occupied by at least one second task set" in step S403 of the embodiment shown in fig. 4 is described below with a specific embodiment.
Fig. 7 is a second flowchart of a task processing method provided in an embodiment of the present application, referring to fig. 7, the method in this embodiment includes:
step S701, executing a first operation, where the first operation includes: judging whether the ith task set is a second task set or not, if so, releasing resources from the resources occupied by the ith task set, wherein the ith task set is other task sets except the first task set; initially, i is 1.
So that the tasks in the ith task set can still be processed in time after some of its resources are released, releasing resources from the resources occupied by the ith task set includes: releasing a second number of resources from the resources occupied by the ith task set, where the second number is smaller than or equal to a third number, and the third number is the difference between the number of resources occupied by the ith task set and the minimum number of resources required by the ith task set.
That is, the maximum number of resources that can be released from the resources occupied by the ith task set is the third number, i.e. the difference between the number of resources occupied by the ith task set and the minimum number of resources required by the ith task set. If the sum of the total number of resources already released from the other second task sets and the third number is less than or equal to the first number, the second number of resources released from the resources occupied by the ith task set may equal the third number. If that sum is greater than the first number, the second number may equal the first number minus the total number already released, in which case the second number is smaller than the third number.
It will be appreciated that the second number of resources released from the resources occupied by the ith task set are the resources occupied by a second number of executing tasks. In one scheme, these may be any second number of the executing tasks in the ith task set. In another scheme, the start execution time of at least one task among the second number of executing tasks is later than the start execution times of the other executing tasks in the ith task set. Since an executing task whose resources are released must later be executed again, preferentially releasing the resources of the tasks in the ith task set that started executing latest reduces the impact on the executing tasks in the ith task set.
Step S702, add 1 to i, and repeatedly execute the first operation until the number of released resources equals the first number, or until i = M and the number of released resources is less than or equal to the first number, where M is the total number of task sets included in the first task scheduling queue minus 1.
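Steps S701 and S702, together with the per-set cap (the third number) and the latest-start-time preemption preference, can be sketched as one loop. All data structures and names here are illustrative assumptions; each task set is modeled as a dict with its occupied resources, required minimum, and running tasks carrying start times.

```python
def release_from_queue(task_sets, first_number):
    """Walk the other task sets in the queue (steps S701/S702) and free
    surplus resources (those above each set's minimum) until first_number
    resources have been released; within each set, preempt the tasks that
    started executing latest to minimise lost work."""
    released_total = 0
    freed_tasks = []
    for ts in task_sets:  # the i-th task set, i = 1..M
        if released_total >= first_number:
            break
        surplus = ts["occupied"] - ts["min_required"]  # the third number
        if surplus <= 0:          # not a "second task set": skip it
            continue
        second_number = min(surplus, first_number - released_total)
        victims = sorted(ts["running"], key=lambda t: t["start"], reverse=True)
        for victim in victims[:second_number]:
            ts["running"].remove(victim)
            ts["occupied"] -= 1   # each task occupies one resource (one core)
            freed_tasks.append(victim["id"])
        released_total += second_number
    return released_total, freed_tasks

queue = [
    {"occupied": 4, "min_required": 3,   # surplus 1
     "running": [{"id": "a", "start": 1}, {"id": "b", "start": 9},
                 {"id": "c", "start": 2}, {"id": "d", "start": 4}]},
    {"occupied": 2, "min_required": 2,   # at its minimum: skipped
     "running": [{"id": "e", "start": 5}, {"id": "f", "start": 6}]},
    {"occupied": 5, "min_required": 3,   # surplus 2
     "running": [{"id": "g", "start": 3}, {"id": "h", "start": 8},
                 {"id": "i", "start": 1}, {"id": "j", "start": 2},
                 {"id": "k", "start": 7}]},
]
released, freed = release_from_queue(queue, first_number=3)
print(released, freed)  # 3 ['b', 'h', 'k']
```

Note that the second task set is skipped because it sits exactly at its minimum, and that every set still holds at least its required minimum after the loop, so the other tenants' requests remain serviceable in real time.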
This embodiment is applicable to the scenario in which a number of resources less than or equal to the first number is released from the resources occupied by at least one second task set.
This embodiment provides a specific implementation of releasing resources from the resources occupied by at least one second task set; the method reduces the impact on the tasks in other task sets while ensuring that the request of the user corresponding to the first task set can be processed in real time.
The method according to the present application is described above, and the apparatus according to the present application will be described below with reference to specific examples.
Fig. 8 is a schematic structural diagram of a task processing device according to an embodiment of the present application, where the task processing device is at least part of a scheduling server of a distributed computing system, and the scheduling server is configured to schedule at least one task scheduling queue, where the task scheduling queue includes at least one task set, and users corresponding to each task scheduling queue are different. As shown in fig. 8, the apparatus of this embodiment may include: a transceiver module 81 and a processing module 82.
The transceiver module 81 is configured to receive a first request from a first user of a client, where the first request is used to request execution of at least one job, a plurality of tasks included in the at least one job form a first task set, and the first task set is located in a first task scheduling queue corresponding to the first user. The processing module 82 is configured to allocate resources for the first task set. If the number of resources allocated to the first task set is smaller than the minimum number of resources required by the first task set and there is a first task waiting to be scheduled in the first task set, the processing module 82 is further configured to release resources from the resources occupied by at least one second task set in the first task scheduling queue so as to execute the first task, where the number of resources occupied by the executing tasks in the second task set is greater than the minimum number of resources required by the second task set. The number of released resources is less than or equal to a first number, which is the difference between the minimum number of resources required by the first task set and the allocated number of resources, or the number of the first tasks.
Optionally, if the difference between the minimum number of resources required by the first task set and the allocated number of resources is greater than the number of first tasks, the first number is the number of first tasks; if the difference between the minimum number of resources required by the first task set and the allocated number of resources is less than or equal to the number of the first tasks, the first number is the difference between the minimum number of resources required by the first task set and the allocated number of resources.
Optionally, the processing module 82 is specifically configured to: performing a first operation, the first operation comprising: judging whether an ith task set is a second task set or not, if so, releasing resources from the resources occupied by the ith task set, wherein the ith task set is other task sets except the first task set; initially, i is 1; and adding 1 to the i, and repeatedly executing the first operation until the number of released resources is equal to the first number, or the number of released resources when i=m is smaller than or equal to the first number, wherein M is the total number of task sets included in the first task scheduling queue minus 1.
Optionally, the processing module 82 is specifically configured to: releasing a second amount of resources from the resources occupied by the ith task set, wherein the second amount is less than or equal to a third amount, and the third amount is the difference between the amount of resources occupied by the ith task set and the minimum amount of resources required by the ith task set.
Optionally, the second number of resources is the resources occupied by the second number of executing tasks, and the start execution time of at least one task in the second number of executing tasks is later than the start execution time of other executing tasks in the ith task set.
Optionally, the first job is allocated to a first scheduling thread pool, the first scheduling thread pool including a first parameter indicating the allocated amount of resources, a second parameter indicating the minimum amount of resources required for the first set of tasks, and a third parameter indicating the amount of the first tasks; before the processing module 82 releases resources from the resources occupied by the at least one second set of tasks, the processing module 82 is further configured to: obtaining the allocated resource quantity according to the first parameter; obtaining the minimum resource quantity required by the first task set according to the second parameter; and obtaining the number of the first tasks according to the third parameter.
Optionally, before the processing module 82 releases resources from the resources occupied by the at least one second set of tasks, the processing module 82 is further configured to: determining that resources are not released from resources occupied by the at least one second task set to the first task set.
Optionally, the first job is allocated to a first scheduling thread pool, the first scheduling thread pool including a fourth parameter indicating whether resources have been released from the resources occupied by the at least one second task set to the first task set; the processing module 82 is specifically configured to determine, according to the fourth parameter, that resources have not been released from the resources occupied by the at least one second task set to the first task set.
The device of the present embodiment may be used to execute the technical solution of the foregoing method embodiment, and its implementation principle and technical effects are similar, and are not described herein again.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 9, the electronic device 900 of this embodiment may be the scheduling server described above. The electronic device may be used to implement the method performed by the scheduling server described in the above method embodiments; for details, reference may be made to the description in the above method embodiments.
The electronic device may comprise one or more processors 901, which processors 901 may also be referred to as processing units, may implement certain control functions. The processor 901 may be a general purpose processor or a special purpose processor, etc.
In an alternative design, the processor 901 may also have stored thereon instructions and/or data 903, where the instructions and/or data 903 may be executed by the processor to cause the electronic device to perform the method described in the method embodiments above.
In another alternative design, a transceiver unit for implementing the receive and transmit functions may be included in processor 901. For example, the transceiver unit may be a transceiver circuit, or an interface circuit. The transceiver circuitry, interface or interface circuitry for implementing the receive and transmit functions may be separate or may be integrated. The transceiver circuit, interface or interface circuit may be used for reading and writing codes/data, or the transceiver circuit, interface or interface circuit may be used for transmitting or transferring signals.
Optionally, the electronic device may include one or more memories 902, on which instructions 904 may be stored, which may be executed on the processor, to cause the electronic device to perform the methods described in the method embodiments above. Optionally, the memory may further store data. In the alternative, the processor may store instructions and/or data. The processor and the memory may be provided separately or may be integrated.
Optionally, the electronic device may further comprise a transceiver 905 and/or an antenna 906. The processor 901 may be referred to as a processing unit for controlling the electronic device. The transceiver 905 may be referred to as a transceiver unit, a transceiver circuit, a transceiver, etc. for implementing a transceiver function.
The processors and transceivers described in this embodiment may be fabricated using a variety of IC process technologies, such as complementary metal oxide semiconductor (complementary metal oxide semiconductor, CMOS), N-type metal oxide semiconductor (NMOS), P-type metal oxide semiconductor (positive channel metal oxide semiconductor, PMOS), bipolar junction transistor (Bipolar Junction Transistor, BJT), bipolar CMOS (BiCMOS), silicon germanium (SiGe), gallium arsenide (GaAs), and the like.
It should be appreciated that the processor in embodiments of the present application may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be implemented by integrated logic circuits of hardware in a processor or instructions in software form. The processor may be a general purpose processor, a digital signal processor (digital signal processor, DSP), an application specific integrated circuit (application specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components.
It will be appreciated that the memory in embodiments of the application may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable EPROM (EEPROM), or a flash memory. The volatile memory may be random access memory (random access memory, RAM) which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous DRAM (SLDRAM), and direct memory bus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The electronic device described in the embodiment of the present application is not limited to the above, and the structure of the electronic device is not limited by fig. 9.
The embodiment of the application further provides a computer storage medium comprising a program or instructions which, when run on a computer, perform the method described in any of the method embodiments above.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the method embodiments described above may be performed by hardware associated with program instructions. The foregoing program may be stored in a computer readable storage medium. The program, when executed, performs steps including the method embodiments described above; and the aforementioned storage medium includes: various media that can store program code, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.

Claims (10)

1. The task processing method is characterized by being applied to a scheduling server of a distributed computing system, wherein the scheduling server is used for scheduling at least one task scheduling queue, the task scheduling queue comprises at least one task set, and users corresponding to each task scheduling queue are different; the method comprises the following steps:
receiving a first request from a first user of a client, wherein the first request is used for requesting to execute at least one job, a plurality of tasks contained in the at least one job form a first task set, and the first task set is positioned in a first task scheduling queue corresponding to the first user;
allocating resources for the first set of tasks;
if the number of the allocated resources for the first task set is smaller than the minimum number of resources required by the first task set and a first task waiting to be scheduled exists in the first task set, releasing resources from the resources occupied by at least one second task set in the first task scheduling queue to execute the first task, wherein the number of the resources occupied by the executing tasks in the second task set is larger than the minimum number of resources required by the second task set; wherein the number of released resources is less than or equal to a first number, which is a difference between a minimum number of resources required for the first task set and the number of allocated resources or the number of the first tasks;
If the difference between the minimum number of resources required by the first task set and the allocated number of resources is greater than the number of the first tasks, the first number is the number of the first tasks;
if the difference between the minimum number of resources required by the first task set and the allocated number of resources is less than or equal to the number of the first tasks, the first number is the difference between the minimum number of resources required by the first task set and the allocated number of resources.
2. The method of claim 1, wherein releasing resources from resources occupied by at least one second set of tasks comprises:
performing a first operation, the first operation comprising: judging whether an ith task set is a second task set or not, if so, releasing resources from the resources occupied by the ith task set, wherein the ith task set is other task sets except the first task set; initially, i is 1;
and adding 1 to the i, and repeatedly executing the first operation until the number of released resources is equal to the first number, or the number of released resources when i=m is smaller than or equal to the first number, wherein M is the total number of task sets included in the first task scheduling queue minus 1.
3. The method of claim 2, wherein the releasing resources from the resources occupied by the ith set of tasks comprises:
releasing a second amount of resources from the resources occupied by the ith task set, wherein the second amount is less than or equal to a third amount, and the third amount is the difference between the amount of resources occupied by the ith task set and the minimum amount of resources required by the ith task set.
4. A method according to claim 3, wherein the second number of resources is the resources occupied by the second number of executing tasks, and wherein the start execution time of at least one of the second number of executing tasks is later than the start execution time of the other executing tasks in the ith set of tasks.
5. The method of claim 1, wherein a first job is allocated to a first scheduling thread pool, the first scheduling thread pool including a first parameter indicating the number of allocated resources, a second parameter indicating the minimum number of resources required by the first task set, and a third parameter indicating the number of the first tasks; before the releasing resources from the resources occupied by the at least one second task set, the method further comprises:
obtaining the number of allocated resources according to the first parameter;
obtaining the minimum number of resources required by the first task set according to the second parameter;
and obtaining the number of the first tasks according to the third parameter.
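The three parameters of claim 5 travel with the per-job scheduling thread pool and together determine whether a release pass is needed at all. A minimal sketch, with field names invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SchedulingThreadPool:
    """Per-job scheduling thread pool carrying the three parameters of
    claim 5 (field names are illustrative, not from the patent)."""
    allocated: int       # first parameter: resources already allocated
    min_required: int    # second parameter: minimum resources required
    waiting_tasks: int   # third parameter: number of first tasks waiting

    def needs_release(self) -> bool:
        # A release pass is warranted only when the allocation is below the
        # minimum AND there are first tasks still waiting to be scheduled.
        return self.allocated < self.min_required and self.waiting_tasks > 0
```

Keeping these counters on the thread pool lets the scheduler test the release precondition without re-scanning the queue.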
6. The method of claim 1, further comprising, prior to the releasing resources from the resources occupied by the at least one second task set:
determining that no resources have been released from the resources occupied by the at least one second task set to the first task set.
7. The method of claim 6, wherein a first job is allocated to a first scheduling thread pool, the first scheduling thread pool including a fourth parameter indicating whether resources have been released from the resources occupied by the at least one second task set to the first task set; the determining comprises:
determining, according to the fourth parameter, that no resources have been released from the resources occupied by the at least one second task set to the first task set.
8. A task processing device, wherein the task processing device is at least part of a scheduling server of a distributed computing system, the scheduling server is configured to schedule at least one task scheduling queue, the task scheduling queue comprises at least one task set, and each task scheduling queue corresponds to a different user; the device comprises:
a transceiver module, configured to receive a first request from a first user of a client, wherein the first request requests execution of at least one job, a plurality of tasks contained in the at least one job form a first task set, and the first task set is located in a first task scheduling queue corresponding to the first user;
a processing module, configured to allocate resources for the first task set;
if the number of resources allocated to the first task set is smaller than the minimum number of resources required by the first task set, and a first task waiting to be scheduled exists in the first task set, the processing module is further configured to release resources from resources occupied by at least one second task set in the first task scheduling queue, so as to execute the first task, wherein the number of resources occupied by the tasks being executed in the second task set is greater than the minimum number of resources required by the second task set; wherein the number of released resources is less than or equal to a first number; if the difference between the minimum number of resources required by the first task set and the number of allocated resources is greater than the number of the first tasks, the first number is the number of the first tasks; if the difference is less than or equal to the number of the first tasks, the first number is that difference.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A computer storage medium, comprising: computer-executable instructions for implementing the method as claimed in any one of claims 1 to 7.
CN202010545414.8A 2020-06-15 2020-06-15 Task processing method and device Active CN111679900B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010545414.8A CN111679900B (en) 2020-06-15 2020-06-15 Task processing method and device


Publications (2)

Publication Number Publication Date
CN111679900A CN111679900A (en) 2020-09-18
CN111679900B true CN111679900B (en) 2023-10-31

Family

ID=72436219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010545414.8A Active CN111679900B (en) 2020-06-15 2020-06-15 Task processing method and device

Country Status (1)

Country Link
CN (1) CN111679900B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113204433B (en) * 2021-07-02 2021-10-22 上海钐昆网络科技有限公司 Dynamic allocation method, device, equipment and storage medium for cluster resources
CN113986497B (en) * 2021-10-27 2022-11-22 北京百度网讯科技有限公司 Queue scheduling method, device and system based on multi-tenant technology
CN117311957A (en) * 2022-06-27 2023-12-29 华为技术有限公司 Resource scheduling method, device and system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102902587A (en) * 2011-07-28 2013-01-30 中国移动通信集团四川有限公司 Distributed task scheduling method, system and device
CN107341050A (en) * 2016-04-28 2017-11-10 北京京东尚科信息技术有限公司 Service processing method and device based on dynamic thread pool
CN107992359A (en) * 2017-11-27 2018-05-04 江苏海平面数据科技有限公司 Cost-aware task scheduling algorithm in a cloud environment
CN109034396A (en) * 2018-07-11 2018-12-18 北京百度网讯科技有限公司 Method and apparatus for processing deep learning jobs in a distributed cluster
CN109240825A (en) * 2018-08-14 2019-01-18 阿里巴巴集团控股有限公司 Elastic task scheduling method, device, equipment and computer-readable storage medium
CN109298936A (en) * 2018-09-11 2019-02-01 华为技术有限公司 Resource scheduling method and device
CN109684092A (en) * 2018-12-24 2019-04-26 新华三大数据技术有限公司 Resource allocation method and device
CN109815019A (en) * 2019-02-03 2019-05-28 普信恒业科技发展(北京)有限公司 Task scheduling method and device, electronic device, and readable storage medium
CN110609742A (en) * 2019-09-25 2019-12-24 苏州浪潮智能科技有限公司 Method and device for configuring queues of a Kubernetes scheduler

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10761886B2 (en) * 2018-09-04 2020-09-01 International Business Machines Corporation Dynamically optimizing load in cloud computing platform using real-time analytics



Similar Documents

Publication Publication Date Title
US10606653B2 (en) Efficient priority-aware thread scheduling
CN111679900B (en) Task processing method and device
US11941434B2 (en) Task processing method, processing apparatus, and computer system
US7734833B2 (en) Method for scheduling operations called by a task on a real-time or non-real time processor
US7822885B2 (en) Channel-less multithreaded DMA controller
US8963933B2 (en) Method for urgency-based preemption of a process
US9448864B2 (en) Method and apparatus for processing message between processors
CN107515786B (en) Resource allocation method, master device, slave device and distributed computing system
WO2016078008A1 (en) Method and apparatus for scheduling data flow task
WO2015130262A1 (en) Multiple pools in a multi-core system
CN107515781B (en) Deterministic task scheduling and load balancing system based on multiple processors
CN106569887B (en) Fine-grained task scheduling method in cloud environment
US20130152100A1 (en) Method to guarantee real time processing of soft real-time operating system
US11068308B2 (en) Thread scheduling for multithreaded data processing environments
US11301304B2 (en) Method and apparatus for managing kernel services in multi-core system
US11301255B2 (en) Method, apparatus, device, and storage medium for performing processing task
EP2840513A1 (en) Dynamic task prioritization for in-memory databases
CN106598706B (en) Method and device for improving stability of server and server
CN116048756A (en) Queue scheduling method and device and related equipment
CN113296957B (en) Method and device for dynamically distributing network bandwidth on chip
US20140181822A1 (en) Fragmented Channels
CN110532099B (en) Resource isolation method and apparatus, electronic device, and medium
Duy et al. Enhanced virtual release advancing algorithm for real-time task scheduling
US9977751B1 (en) Method and apparatus for arbitrating access to shared resources
US20230305887A1 (en) Processing engine scheduling for time-space partitioned processing systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant