
Task execution queue processing method and device, storage medium and electronic equipment

Info

Publication number
CN113127185A
CN113127185A
Authority
CN
China
Prior art keywords
task
executed
execution
execution queue
task execution
Prior art date
Legal status
Granted
Application number
CN201911406445.9A
Other languages
Chinese (zh)
Other versions
CN113127185B (en)
Inventor
费伟
Current Assignee
Beijing Yiyiyun Technology Co ltd
Original Assignee
Beijing Yiyiyun Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Yiyiyun Technology Co ltd filed Critical Beijing Yiyiyun Technology Co ltd
Priority to CN201911406445.9A priority Critical patent/CN113127185B/en
Publication of CN113127185A publication Critical patent/CN113127185A/en
Application granted granted Critical
Publication of CN113127185B publication Critical patent/CN113127185B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to a task execution queue processing method and apparatus, a storage medium, and electronic equipment, and belongs to the field of computer technology. The method comprises: receiving a task execution request including a task to be executed, and, in response to the task execution request, submitting the task to be executed to a task execution queue corresponding to the task to be executed; executing the task to be executed in the task execution queue, and judging whether the current execution time of the task to be executed exceeds the termination execution time of the task to be executed; and deleting the task execution queue when it is determined that the current execution time exceeds the termination execution time of the task to be executed. The invention improves the utilization rate of the task execution queue and thereby improves the resource utilization rate.

Description

Task execution queue processing method and device, storage medium and electronic equipment
Technical Field
Embodiments of the present invention relate to the field of computer technology, and in particular to a task execution queue processing method, a task execution queue processing apparatus, a computer-readable storage medium, and an electronic device.
Background
In the era of big data, data analysis and data mining have become increasingly frequent and have entered every aspect of daily life and every sector of society: the state operates big data centers, enterprises build their own, and cloud computing platforms spring up like bamboo shoots after a rain. Faced with differing computing requirements, resources must be coordinated and allocated on demand by industry, by department, and by person, so that the value of computing resources is maximized, since computing resources are, after all, limited.
Existing computing resource scheduling systems provide three default resource schedulers: a first-in-first-out (FIFO) scheduler, a capacity scheduler, and a fair scheduler. Resources are divided according to preset proportions, for example 30% for department A, 60% for department B, and 10% for department C. Within each department's fixed allocation, coordination then follows a mechanism such as a fairness principle or a first-come-first-served principle.
However, the above solution has the following drawback: the fixed resource proportion configuration greatly limits the flexibility of resource allocation, and in particular causes considerable waste of computing resources when the timing of production tasks is not fixed.
Therefore, it is desirable to provide a new task execution queue processing method.
It is to be noted that the information disclosed in the above background section is only for enhancing the understanding of the background of the present invention, and therefore may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present invention is directed to a task execution queue processing method, a task execution queue processing apparatus, a computer-readable storage medium, and an electronic device, which overcome, at least to some extent, the problem of wasted computing resources caused by the limitations and disadvantages of the related art.
According to an aspect of the present disclosure, there is provided a task execution queue processing method including:
receiving a task execution request comprising a task to be executed, and submitting the task to be executed to a task execution queue corresponding to the task to be executed in response to the task execution request;
executing the task to be executed in the task execution queue, and judging whether the current execution time of the task to be executed exceeds the termination execution time of the task to be executed;
and deleting the task execution queue when the current execution time is determined to exceed the termination execution time of the task to be executed.
In an exemplary embodiment of the present disclosure, deleting the task execution queue includes:
judging whether the tasks to be executed in the task execution queue are finished executing or not;
and if the task to be executed in the task execution queue is completed, deleting the task execution queue.
In an exemplary embodiment of the present disclosure, the task execution queue processing method further includes:
and if the tasks to be executed in the task execution queue are not finished, moving the unfinished tasks to be executed to a default queue, and deleting the task execution queue.
In an exemplary embodiment of the present disclosure, before receiving a task execution request including a task to be executed sent by a user, the task execution queue processing method further includes:
receiving a resource reservation request comprising the initial execution time, the ending execution time and the target resource amount of the task to be executed; wherein the target resource amount is determined based on an available resource amount presented in a preset time axis;
and responding to the resource reservation request, and creating a task execution queue corresponding to the task to be executed according to the initial execution time, the termination execution time and the target resource amount.
In an exemplary embodiment of the present disclosure, after deleting the task execution queue, the task execution queue processing method further includes:
and initializing the available resource amount presented in the preset time axis to update the available resource amount.
In an exemplary embodiment of the present disclosure, creating a task execution queue corresponding to the to-be-executed task according to the start execution time, the end execution time, and the target resource amount includes:
refreshing the historical configuration file to obtain a current configuration file;
modifying the current configuration file according to the initial execution time, the final execution time and the target resource amount;
generating a task execution queue corresponding to the task to be executed based on the modified current configuration file; and naming the task execution queue by the task name of the task to be executed.
In an exemplary embodiment of the present disclosure, creating a task execution queue corresponding to the to-be-executed task according to the start execution time, the end execution time, and the target resource amount includes:
judging whether a sender of the resource reservation request completes user authentication;
and if the sender completes the user authentication, creating a task execution queue corresponding to the task to be executed according to the initial execution time, the termination execution time and the target resource amount.
According to an aspect of the present disclosure, there is provided a task execution queue processing apparatus including:
the to-be-executed task submitting module is used for receiving a task execution request comprising a to-be-executed task and submitting the to-be-executed task to a task execution queue corresponding to the to-be-executed task in response to the task execution request;
a termination execution time judging module, configured to execute the to-be-executed task in the task execution queue, and judge whether a current execution time of the to-be-executed task exceeds a termination execution time of the to-be-executed task;
and the task execution queue deleting module is used for deleting the task execution queue when the current execution time of the task to be executed is determined to exceed the termination execution time of the task to be executed.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the task execution queue processing method of any one of the above.
According to an aspect of the present disclosure, there is provided an electronic device including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform any one of the above task execution queue processing methods via execution of the executable instructions.
In the task execution queue processing method provided by the embodiment of the invention, on one hand, the task to be executed is submitted to the task execution queue corresponding to the task to be executed; the task is then executed in the task execution queue, and it is judged whether the current execution time of the task exceeds its termination execution time; finally, when the current execution time is determined to exceed the termination execution time, the task execution queue is deleted. This solves the prior-art problem that a fixed resource proportion configuration greatly limits the flexibility of resource allocation and, especially when the timing of production tasks is not fixed, wastes a large amount of computing resources; the utilization rate of the task execution queue is improved, and the resource utilization rate is improved in turn. On the other hand, because the task to be executed is submitted to and executed in its own corresponding task execution queue, the situation in which a task cannot be executed because the resources it requires exceed the remaining resources is avoided, and the execution efficiency of the task is improved. On yet another hand, because the task execution queue is deleted once the current time is determined to exceed the termination execution time, the prior-art problem that a queue whose tasks have finished executing cannot be deleted, and therefore wastes resources, is solved: the task execution queue is deleted automatically after the current execution time exceeds the termination execution time.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 schematically shows a flowchart of a task execution queue processing method according to an exemplary embodiment of the present invention.
Fig. 2 is a diagram schematically showing an example of the structure of a universal resource management system according to an example embodiment of the present invention.
Fig. 3 schematically shows a flowchart of another task execution queue processing method according to an exemplary embodiment of the present invention.
Fig. 4 schematically shows an example diagram of a visualization timeline according to an example embodiment of the present invention.
Fig. 5 schematically shows an example diagram of another visualization timeline according to an example embodiment of the present invention.
Fig. 6 is a flowchart schematically illustrating a method for creating a task execution queue corresponding to the task to be executed according to the starting execution time, the ending execution time, and the target resource amount, according to an exemplary embodiment of the present invention.
Fig. 7 schematically shows a flowchart of another task execution queue processing method according to an exemplary embodiment of the present invention.
Fig. 8 is a block diagram schematically illustrating a task execution queue processing apparatus according to an exemplary embodiment of the present invention.
Fig. 9 schematically illustrates an electronic device for implementing the task execution queue processing method according to an exemplary embodiment of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the invention.
Furthermore, the drawings are merely schematic illustrations of the invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The fixed resource proportion configuration greatly limits the flexibility of resource allocation. In particular, when the timing of production tasks is not fixed and the priority of production tasks needs to be coordinated, considerable manpower is required to resolve resource coordination. For example, user A needs 100 GB of memory and 20% of the CPU computing resources between 9:00 and 10:00, while user B needs 50 GB of memory and 5% of the CPU computing resources between 10:00 and 20:00; the management mechanisms provided by current resource scheduling systems cannot handle this unless the corresponding configuration file is modified manually each time and the service configuration is then refreshed. It is therefore very important to let a user apply for resources in advance, much like booking a conference room, and obtain the desired quota of resources within the applied-for time. In addition, in terms of queue management, the various schedulers of existing computing resource scheduling systems cannot dynamically add queues; to add a queue, configuration parameters must be modified and the service restarted.
This example embodiment first provides a task execution queue processing method, which may be executed on a server, a server cluster, a cloud server, or the like. The server may be, for example, Tomcat, and may be used to deploy and develop the back-end service of a user order management system; of course, those skilled in the art may also run the method of the present invention on other platforms as needed, and this is not particularly limited in this exemplary embodiment. Referring to fig. 1, the task execution queue processing method may include the following steps:
Step S110: receiving a task execution request including a task to be executed, and submitting the task to be executed to a task execution queue corresponding to the task to be executed in response to the task execution request.
Step S120: executing the task to be executed in the task execution queue, and judging whether the current time exceeds the termination execution time of the task to be executed.
Step S130: deleting the task execution queue when it is determined that the current time exceeds the termination execution time of the task to be executed.
In the task execution queue processing method, on one hand, the task to be executed is submitted to the task execution queue corresponding to it; the task is then executed in that queue, and it is judged whether the current execution time of the task exceeds its termination execution time; finally, when the current execution time is determined to exceed the termination execution time, the task execution queue is deleted. This solves the prior-art problem that a fixed resource proportion configuration greatly limits the flexibility of resource allocation and, especially when the timing of production tasks is not fixed, wastes a large amount of computing resources, so that the utilization rate of the task execution queue and, in turn, the resource utilization rate are improved. On the other hand, because the task to be executed is submitted to and executed in its own corresponding queue, the situation in which a task cannot be executed because the resources it requires exceed the remaining resources is avoided, and the execution efficiency of the task is improved. On yet another hand, because the task execution queue is deleted once the current time is determined to exceed the termination execution time, the prior-art problem that a queue whose tasks have finished executing cannot be deleted, and therefore wastes resources, is solved: the task execution queue is deleted automatically after the current execution time exceeds the termination execution time.
Hereinafter, each step involved in the task execution queue processing method according to the exemplary embodiment of the present invention will be explained and explained in detail with reference to the drawings.
First, terms related to exemplary embodiments of the present invention are explained and explained.
Yarn (Yet Another Resource Negotiator) is a general-purpose resource management system responsible for resource allocation and task scheduling of a Yarn cluster. Yarn mainly comprises three components: the RM (Resource Manager), the NM (Node Manager), and the AM (Application Master).
In Yarn, the unit in which resources are represented is the Container, a concept carried over from the decomposition of MRv1. A Container is Yarn's abstraction of resources: it encapsulates a certain amount of computing and storage resources, such as CPU and memory, on a particular node. The AM applies to the RM for resources, the scheduler in the RM allocates Containers to the AM, and after receiving a Container the AM notifies the NM to start it and execute the task. Specifically, each job comprises a plurality of tasks; the AM applies for resources for each task, the RM is responsible for allocating those resources, the NM is responsible for running and managing the Containers, and each task runs in one Container.
Specifically, referring to fig. 2, the RM (Resource Manager) 201, NM (Node Manager) 202, AM (Application Master) 203, and client 204 may be communicatively connected through a remote procedure call protocol. Each node in the system may include a node manager and one or more application masters. The client is used to submit jobs to the AM, and a job includes a plurality of tasks to be performed.
The RM is responsible for resource management and scheduling of the entire system: according to the resource requests of the AM, it allocates resources to each task in a job and feeds the allocation results back to the AM. The NM is responsible for resource management on its own node: after the AM on the node obtains the container for a task, the NM executes the task corresponding to that container and isolates the network bandwidth of each task. The NM also periodically reports the node's resource usage and the running state of each container on the node to the RM.
HDFS (Hadoop Distributed File System): a distributed file storage system that provides multi-copy, highly available file storage services.
Hadoop: a general term for a distributed storage and distributed computing scheduling system, comprising two subsystems, HDFS and Yarn.
GB: a unit of computer data size; 1 GB equals 1024 MB, 1 MB equals 1024 KB, and 1 KB equals 1024 bytes.
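By way of non-limiting illustration, the following minimal Java sketch shows how an AM-side component might request a Container from the RM through the Yarn client API described above; the memory and vCore figures are arbitrary examples, and the allocate/launch loop and error handling are omitted.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

public class ContainerRequestSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // The AM communicates with the RM through an AMRMClient.
        AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
        rmClient.init(conf);
        rmClient.start();

        // Register this AM with the RM (host, port, and tracking URL left empty here).
        rmClient.registerApplicationMaster("", 0, "");

        // Ask the RM's scheduler for one container with 1024 MB of memory and 1 vCore.
        Resource capability = Resource.newInstance(1024, 1);
        ContainerRequest request =
                new ContainerRequest(capability, null, null, Priority.newInstance(0));
        rmClient.addContainerRequest(request);

        // Allocated containers are returned by subsequent allocate() calls;
        // the AM would then ask the NM to launch them (omitted in this sketch).
    }
}
```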
In step S110, a task execution request including a task to be executed is received, and the task to be executed is submitted to a task execution queue corresponding to the task to be executed in response to the task execution request.
With continued reference to fig. 2, when the application master 203 in Yarn receives a task execution request including a task to be executed sent by the client 204, it may, in response to that request, submit the task to be executed to the task execution queue corresponding to the task. The task execution queue is created in advance according to the start execution time and termination execution time of the task to be executed and the target resource amount required to execute it. Submitting the task to its corresponding task execution queue therefore avoids both the situation in which the task cannot be executed because the resources it requires exceed the remaining resources and the waste caused by reserving too large a resource amount, which improves the execution efficiency of the task to be executed.
In step S120, the to-be-executed task is executed in the task execution queue, and it is determined whether the current execution time of the to-be-executed task exceeds the termination execution time of the to-be-executed task.
In this exemplary embodiment, while the task to be executed is being executed, it may be judged in real time whether the current execution time exceeds the termination execution time of the task. If the current execution time does not exceed the termination execution time, execution of the task continues; once the termination execution time is exceeded, execution of the task must be stopped. In this way, the situation in which execution continues beyond the termination execution time, occupying resources and preventing other users from executing their own tasks in time, can be avoided.
In step S130, when it is determined that the current time exceeds the termination execution time of the task to be executed, the task execution queue is deleted.
In this example embodiment, upon determining that the current time exceeds the termination execution time of the task to be executed, the task execution queue may be deleted. This solves the prior-art problem that a queue whose tasks have finished executing cannot be deleted and therefore wastes resources: the task execution queue is deleted automatically after the current execution time exceeds the termination execution time.
Further, to avoid the situation in which the task execution queue is deleted while it still contains unexecuted tasks, which would turn those unfinished tasks into orphan tasks, deleting the task execution queue may further include: judging whether the tasks to be executed in the task execution queue have finished executing; and, if the tasks in the queue have finished, deleting the task execution queue. Of course, if the tasks to be executed in the queue have not finished, the unfinished tasks are moved to a default queue and the task execution queue is then deleted.
Specifically, when the user's reserved time period ends and the computing task is still not finished, the background system reclaims the queue's computing resources and the user's task is returned to a resource pool shared by all users. This pool does not guarantee any user's share; resources in it are obtained through fair competition or on a first-come-first-served basis.
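The check-and-delete behaviour described above may be summarised by the following self-contained sketch; the ReservedQueue type and the deleteQueue stub are hypothetical names introduced only for illustration and are not taken from Yarn or from the embodiments above.

```java
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.Queue;

public class QueueExpirySketch {

    /** Hypothetical in-memory model of a reserved task execution queue. */
    static class ReservedQueue {
        final String name;
        final Instant endTime;                       // termination execution time
        final Queue<String> pendingTasks = new ArrayDeque<>();

        ReservedQueue(String name, Instant endTime) {
            this.name = name;
            this.endTime = endTime;
        }
    }

    /** Called periodically: delete the queue once its reserved period has ended. */
    static void checkAndDelete(ReservedQueue queue, Queue<String> defaultQueue) {
        if (Instant.now().isBefore(queue.endTime)) {
            return; // still inside the reserved period, keep executing
        }
        // Past the termination execution time: any unfinished task is moved to
        // the shared default queue so that it does not become an orphan task.
        while (!queue.pendingTasks.isEmpty()) {
            defaultQueue.add(queue.pendingTasks.poll());
        }
        deleteQueue(queue);
    }

    static void deleteQueue(ReservedQueue queue) {
        // In the real system this would remove the queue from the scheduler
        // configuration and refresh the scheduler; here it is only a stub.
        System.out.println("Deleting queue " + queue.name);
    }
}
```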
Fig. 3 schematically illustrates another task execution queue processing method according to an exemplary embodiment of the present invention. Referring to fig. 3, the task execution queue processing method may further include step S310 and step S320, which will be described in detail below.
In step S310, a resource reservation request including a start execution time, a termination execution time, and a target resource amount of the task to be executed is received; wherein the target resource amount is determined based on an available resource amount presented in a preset time axis.
In the present exemplary embodiment, the preset time axis is explained first. The preset time axis may be, for example, a visual timeline through which the computing resources of the cluster that can be applied for are managed. A part of the cluster's computing resources is permanently allocated to scheduled, routine data analysis and data mining tasks, while another part is reserved for temporary production tasks, in particular project-specific data production tasks. The amount of computing resources in this latter part may be managed through the visual timeline, on which the user can clearly see how many computing resources of the cluster can be applied for at any future time, as shown in fig. 4.
Further, after selecting a time period, the user can reserve computing resources within that period (thereby generating the resource reservation request) according to the target resource amount required by the task to be executed, with the maximum amount available on the visual time axis as the upper limit and any value greater than 0 as the lower limit, making the process as convenient as booking a conference room.
In step S320, a task execution queue corresponding to the to-be-executed task is created according to the starting execution time, the ending execution time, and the target resource amount in response to the resource reservation request.
In this exemplary embodiment, after a resource reservation request submitted by the client is received, a task execution queue corresponding to the task to be executed may be created according to the start execution time, the termination execution time, and the target resource amount in response to the resource reservation request. It should be added that, after the resource reservation request is received, the resources in the reserved period (from the start execution time to the termination execution time) are locked, so that when other users enter the system the amount of resources they can reserve is correspondingly reduced, as shown in fig. 5.
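By way of example, a reservation may be validated against the time axis roughly as follows; this is only an illustrative sketch, and the Reservation record and the total reservable amount are assumptions rather than elements of the embodiment.

```java
import java.time.Instant;
import java.util.List;

public class TimelineAvailabilitySketch {

    /** Hypothetical record of an already accepted (locked) reservation. */
    record Reservation(Instant start, Instant end, long amountGb) {}

    /**
     * Returns true if a new reservation of {@code requestedGb} fits between
     * {@code start} and {@code end}, given the already locked reservations.
     */
    static boolean fits(List<Reservation> existing, Instant start, Instant end,
                        long requestedGb, long totalReservableGb) {
        long lockedInWindow = 0;
        for (Reservation r : existing) {
            // Only reservations whose interval overlaps the requested window count.
            boolean overlaps = r.start().isBefore(end) && start.isBefore(r.end());
            if (overlaps) {
                lockedInWindow += r.amountGb(); // conservative: sum all overlapping locks
            }
        }
        // Lower limit: more than 0; upper limit: the reservable maximum on the time axis.
        return requestedGb > 0 && lockedInWindow + requestedGb <= totalReservableGb;
    }
}
```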
Further, in order to improve the security of Yarn, user authentication is also required for the sender of the resource reservation request. Specifically, it is first judged whether the sender of the resource reservation request has completed user authentication; if the sender has completed user authentication, a task execution queue corresponding to the task to be executed is created according to the start execution time, the termination execution time, and the target resource amount.
Specifically, user information for the enterprise's users (the sender of the resource reservation request, for example the client 204) may be managed through an LDAP (Lightweight Directory Access Protocol) service, and the computing resource application system may then perform user authentication based on this user system, achieving unified user management. It should be noted that, if user authentication has not been completed, the sender may first be authenticated and the task execution queue created after authentication is complete; once authentication is complete, a notification of completion may be sent to the sender. Of course, the sender may also send an authentication request and complete authentication before sending the resource reservation request, which is not limited in this example.
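By way of example, user authentication against an LDAP directory may be performed with a simple bind through the standard JNDI API, as sketched below; the server URL and the DN format are placeholders, not values taken from the embodiment.

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.InitialDirContext;

public class LdapAuthSketch {

    /** Returns true if a simple LDAP bind succeeds with the given credentials. */
    static boolean authenticate(String userDn, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389"); // placeholder host
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, userDn);    // e.g. "uid=alice,ou=people,dc=example,dc=com"
        env.put(Context.SECURITY_CREDENTIALS, password);
        try {
            // A successful bind means the credentials are valid.
            new InitialDirContext(env).close();
            return true;
        } catch (NamingException e) {
            return false;
        }
    }
}
```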
It should be further added that, after the task execution queue is deleted, the available resource amount presented on the preset time axis can be promptly initialized so as to update the available resource amount. Other users can then generate new resource reservation requests in time according to the updated available resource amount, which improves the user experience.
Fig. 6 is a flowchart schematically illustrating a method for creating a task execution queue corresponding to the task to be executed according to the starting execution time, the ending execution time, and the target resource amount, according to an exemplary embodiment of the present invention. Referring to fig. 6, creating a task execution queue corresponding to the to-be-executed task according to the start execution time, the end execution time, and the target resource amount may include steps S610 to S630, which will be described in detail below.
In step S610, the historical configuration file is refreshed to obtain the current configuration file.
In step S620, the current configuration file is modified according to the starting execution time, the ending execution time, and the target resource amount.
In step S630, based on the modified current configuration file, a task execution queue corresponding to the task to be executed is generated; and naming the task execution queue by the task name of the task to be executed.
Steps S610 to S630 will now be explained. First, for a resource reservation order (resource reservation request) submitted by a user, which includes the amount of computing resources needed in a certain time period, the order background processing system periodically refreshes the corresponding configuration file, generates a new queue, configures the corresponding amount of resources, and restricts submission so that only that user can submit tasks to the queue. Furthermore, generating a new queue only requires a modification of the configuration file; the Yarn source code, however, needs to be modified so that the related functions are supported, namely refreshing and re-reading newly added queue data from the configuration file (the existing functionality can only modify queue resource configuration and add queues; it cannot delete queues).
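For a capacity-scheduler deployment, generating such a per-order queue amounts to writing a few additional properties into capacity-scheduler.xml and asking the RM to refresh its queues. The sketch below uses the standard capacity scheduler property names, while the queue name, capacity figure, user, and file path are illustrative assumptions.

```java
import java.io.FileOutputStream;
import org.apache.hadoop.conf.Configuration;

public class QueueConfigSketch {
    public static void main(String[] args) throws Exception {
        // Assumed inputs: queue named after the task, 20% of the cluster, one allowed user.
        String queue = "order_20191231_report";
        String user  = "alice";

        Configuration conf = new Configuration(false);
        conf.set("yarn.scheduler.capacity.root.queues", "default," + queue);
        conf.set("yarn.scheduler.capacity.root." + queue + ".capacity", "20");
        conf.set("yarn.scheduler.capacity.root." + queue + ".maximum-capacity", "20");
        conf.set("yarn.scheduler.capacity.root." + queue + ".acl_submit_applications", user);
        // In practice the existing capacity-scheduler.xml would be loaded first and the
        // other queues' capacities adjusted so they still sum to 100; omitted for brevity.

        try (FileOutputStream out =
                     new FileOutputStream("/etc/hadoop/conf/capacity-scheduler.xml")) {
            conf.writeXml(out); // write the modified configuration file (path is an assumption)
        }

        // The RM is then asked to re-read the queue configuration, e.g. with:
        //   yarn rmadmin -refreshQueues
    }
}
```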
For example, taking the capacity scheduler as an example: when the scheduler is initialized, the initializeQueues method performs the resource configuration initialization of the queues. This method calls the parseQueue method, which relies on two core variables, oldQueue and newQueue, where:
oldQueue: a variable captured before the resource configuration file is loaded, recording the current real-time queue information and the running tasks.
newQueue: holds all queue resource configuration information after the configuration file is reloaded. During the loading of the new queue configuration, and only then, tasks belonging to queues that no longer exist are automatically moved to the default queue, so that when a queue is deleted the tasks in it do not become orphaned tasks without a queue.
In addition, the configuration is generated automatically after an order is created; when the reserved end time arrives, the queue needs to be deleted and its tasks moved to the default queue. Therefore, only one moveToDefault method needs to be added, and every task that loses its queue calls this method. It should be added that the task execution queue may be named after the task name of the task to be executed; owing to the uniqueness of the task name, the task to be executed can then be accurately submitted to its corresponding task execution queue for execution according to that name.
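The moveToDefault idea may be sketched as a comparison between the queue set before and after the configuration reload; the types and method names below are illustrative stand-ins rather than the actual capacity scheduler classes.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class MoveToDefaultSketch {

    /**
     * For every queue that existed before the reload (oldQueues) but is absent
     * afterwards (newQueues), move its still-running applications to "default".
     */
    static void moveOrphansToDefault(Map<String, List<String>> oldQueues,
                                     Set<String> newQueues) {
        for (Map.Entry<String, List<String>> entry : oldQueues.entrySet()) {
            String queueName = entry.getKey();
            if (newQueues.contains(queueName)) {
                continue; // queue survived the reload, nothing to do
            }
            for (String applicationId : entry.getValue()) {
                moveToDefault(applicationId); // hypothetical helper
            }
        }
    }

    static void moveToDefault(String applicationId) {
        // In the real scheduler this would re-attach the application to the
        // default queue; here it is only a stub for illustration.
        System.out.println("Moving " + applicationId + " to the default queue");
    }
}
```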
Hereinafter, a task execution queue processing method according to an exemplary embodiment of the present invention will be further explained and explained with reference to fig. 7. Referring to fig. 7, the task execution queue processing method may include the steps of:
Step S710: providing a visual time axis to the user, so that the user can generate a resource reservation request according to the amount of computing resources the cluster can provide for application at future times;
Step S720: receiving the user's resource reservation request, generating a new task execution queue according to the time period and the amount of computing resources included in the request, configuring the corresponding amount of resources, and restricting submission so that only that user can submit tasks to the queue;
Step S730: receiving a task execution request from the user, submitting the task to be executed to the task execution queue, and executing the task to be executed in that queue;
Step S740: when the current execution time exceeds the termination execution time, judging whether the tasks to be executed in the task execution queue have finished; if they have finished, deleting the task execution queue directly; if they have not, moving the unfinished tasks to a default queue and then deleting the task execution queue.
The task queue execution method provided by the exemplary embodiment of the present invention has at least the following advantages:
On one hand, the method provides a visual cluster resource pre-application system: a user can see at a glance how many resources the cluster will have available in the future and then apply according to the user's own production plan, while the background system automatically generates the corresponding resource queue and provides it to the user.
On another hand, whereas traditional resource allocation assigns computing resources manually and statically, the system provides a flexible, automatic mechanism for modifying cluster resource allocation, which greatly reduces the cost of operational implementation and requires no manual intervention at all.
On yet another hand, applying for computing resources for a fixed period under the existing approach requires coordination among multiple parties followed by e-mail approval, consuming considerable manpower; the time cost of a single round of coordination is often measured in hours or even days. The method reduces this manpower cost to nearly zero and requires almost no human intervention.
The embodiment of the invention also provides a task execution queue processing device. Referring to fig. 8, the task execution queue processing apparatus may include a pending task submission module 810, a termination execution time determination module 820, and a task execution queue deletion module 830. Wherein:
the to-be-executed task submitting module 810 may be configured to receive a task execution request including a to-be-executed task, and submit the to-be-executed task to a task execution queue corresponding to the to-be-executed task in response to the task execution request.
The execution termination time determining module 820 may be configured to determine whether the current time exceeds the execution termination time of the task to be executed.
The task execution queue deleting module 830 may be configured to delete the task execution queue when it is determined that the current time exceeds the termination execution time of the task to be executed.
In an exemplary embodiment of the present disclosure, deleting the task execution queue includes:
judging whether the tasks to be executed in the task execution queue are finished executing or not;
and if the task to be executed in the task execution queue is completed, deleting the task execution queue.
In an exemplary embodiment of the present disclosure, the task execution queue processing apparatus may further include:
the to-be-executed task moving module may be configured to, if the to-be-executed task in the task execution queue is not completed, move the uncompleted to-be-executed task to a default queue, and delete the task execution queue.
In an exemplary embodiment of the present disclosure, the task execution queue processing apparatus may further include:
a resource reservation request receiving module, configured to receive a resource reservation request including a start execution time, a termination execution time, and a target resource amount of the task to be executed; wherein the target resource amount is determined based on an available resource amount presented in a preset time axis;
and the task execution queue creating module may be configured to create, in response to the resource reservation request, a task execution queue corresponding to the task to be executed according to the starting execution time, the ending execution time, and the target resource amount.
In an exemplary embodiment of the present disclosure, the task execution queue processing apparatus may further include:
the initialization processing module may be configured to perform initialization processing on the available resource amount presented in the preset time axis, so as to update the available resource amount.
In an exemplary embodiment of the present disclosure, creating a task execution queue corresponding to the to-be-executed task according to the start execution time, the end execution time, and the target resource amount includes:
refreshing the historical configuration file to obtain a current configuration file;
modifying the current configuration file according to the initial execution time, the final execution time and the target resource amount;
generating a task execution queue corresponding to the task to be executed based on the modified current configuration file; and naming the task execution queue by the task name of the task to be executed.
In an exemplary embodiment of the present disclosure, creating a task execution queue corresponding to the to-be-executed task according to the start execution time, the end execution time, and the target resource amount includes:
judging whether a sender of the resource reservation request completes user authentication;
and if the sender completes the user authentication, creating a task execution queue corresponding to the task to be executed according to the initial execution time, the termination execution time and the target resource amount.
The specific details of each module in the task execution queue processing apparatus have been described in detail in the corresponding task execution queue processing method, and therefore are not described herein again.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the invention. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods of the present invention are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
In an exemplary embodiment of the present invention, there is also provided an electronic device capable of implementing the above method.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, all of which may generally be referred to herein as a "circuit," "module," or "system."
An electronic device 900 according to this embodiment of the invention is described below with reference to fig. 9. The electronic device 900 shown in fig. 9 is only an example and should not bring any limitations to the function and scope of use of the embodiments of the present invention.
As shown in fig. 9, the electronic device 900 is embodied in the form of a general purpose computing device. Components of electronic device 900 may include, but are not limited to: the at least one processing unit 910, the at least one storage unit 920, a bus 930 connecting different system components (including the storage unit 920 and the processing unit 910), and a display unit 940.
Wherein the storage unit stores program code that is executable by the processing unit 910 to cause the processing unit 910 to perform steps according to various exemplary embodiments of the present invention described in the above section "exemplary methods" of the present specification. For example, the processing unit 910 may execute step S110 as shown in fig. 1: receiving a task execution request comprising a task to be executed, and submitting the task to be executed to a task execution queue corresponding to the task to be executed in response to the task execution request; step S120: executing the task to be executed in the task execution queue, and judging whether the current time exceeds the termination execution time of the task to be executed; step S130: and deleting the task execution queue when the current time is determined to exceed the termination execution time of the task to be executed.
The storage unit 920 may include a readable medium in the form of a volatile storage unit, such as a random access memory unit (RAM) 9201 and/or a cache memory unit 9202, and may further include a read-only memory unit (ROM) 9203.
Storage unit 920 may also include a program/utility 9204 having a set (at least one) of program modules 9205, such program modules 9205 including but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 930 can be any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 900 may also communicate with one or more external devices 1000 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 900, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 900 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interface 950. Also, the electronic device 900 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) via the network adapter 960. As shown, the network adapter 960 communicates with the other modules of the electronic device 900 via the bus 930. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 900, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiment of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which can be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the embodiment of the present invention.
In an exemplary embodiment of the present invention, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.
According to the program product for realizing the method, the portable compact disc read only memory (CD-ROM) can be adopted, the program code is included, and the program product can be operated on terminal equipment, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (10)

1. A task execution queue processing method is characterized by comprising the following steps:
receiving a task execution request comprising a task to be executed, and submitting the task to be executed to a task execution queue corresponding to the task to be executed in response to the task execution request;
executing the task to be executed in the task execution queue, and judging whether the current execution time of the task to be executed exceeds the termination execution time of the task to be executed;
and deleting the task execution queue when the current execution time is determined to exceed the termination execution time of the task to be executed.
2. The method according to claim 1, wherein deleting the task execution queue comprises:
judging whether the tasks to be executed in the task execution queue are finished executing or not;
and if the task to be executed in the task execution queue is completed, deleting the task execution queue.
3. The task execution queue processing method according to claim 2, further comprising:
and if the tasks to be executed in the task execution queue are not finished, moving the unfinished tasks to be executed to a default queue, and deleting the task execution queue.
4. The task execution queue processing method according to claim 1, wherein before receiving a task execution request including a task to be executed sent by a user, the task execution queue processing method further comprises:
receiving a resource reservation request comprising the initial execution time, the ending execution time and the target resource amount of the task to be executed; wherein the target resource amount is determined based on an available resource amount presented in a preset time axis;
and responding to the resource reservation request, and creating a task execution queue corresponding to the task to be executed according to the initial execution time, the termination execution time and the target resource amount.
5. The method according to claim 4, wherein after the task execution queue is deleted, the method further comprises:
and initializing the available resource amount presented in the preset time axis to update the available resource amount.
6. The method according to claim 4, wherein creating the task execution queue corresponding to the task to be executed according to the start execution time, the end execution time, and the target resource amount comprises:
refreshing the historical configuration file to obtain a current configuration file;
modifying the current configuration file according to the initial execution time, the final execution time and the target resource amount;
generating a task execution queue corresponding to the task to be executed based on the modified current configuration file; and naming the task execution queue by the task name of the task to be executed.
7. The method according to claim 4, wherein creating the task execution queue corresponding to the task to be executed according to the initial execution time, the termination execution time, and the target resource amount comprises:
judging whether a sender of the resource reservation request has completed user authentication;
and if the sender has completed the user authentication, creating a task execution queue corresponding to the task to be executed according to the initial execution time, the termination execution time and the target resource amount.
8. A task execution queue processing apparatus, comprising:
a to-be-executed task submitting module, configured to receive a task execution request comprising a task to be executed and, in response to the task execution request, submit the task to be executed to a task execution queue corresponding to the task to be executed;
a termination time judging module, configured to execute the task to be executed in the task execution queue and judge whether a current execution time of the task to be executed exceeds a termination execution time of the task to be executed;
and a task execution queue deleting module, configured to delete the task execution queue when it is determined that the current execution time of the task to be executed exceeds the termination execution time of the task to be executed.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the task execution queue processing method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the task execution queue processing method according to any one of claims 1 to 7 via execution of the executable instructions.
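
The following is a minimal, non-authoritative Python sketch of the queue life cycle recited in claims 1 to 3: a task to be executed is submitted to its corresponding task execution queue, the current execution time is compared with the termination execution time, and on expiry any unfinished task is moved to a default queue before the queue is deleted. All class, attribute and method names are illustrative assumptions, not part of the claimed implementation.

```python
import time
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Task:
    name: str
    finished: bool = False


@dataclass
class TaskExecutionQueue:
    name: str
    stop_time: float                       # termination execution time (epoch seconds)
    tasks: List[Task] = field(default_factory=list)


class QueueManager:
    """Hypothetical in-memory manager; all names here are illustrative only."""

    def __init__(self) -> None:
        self.default_queue = TaskExecutionQueue("default", stop_time=float("inf"))
        self.queues: Dict[str, TaskExecutionQueue] = {}

    def submit(self, queue_name: str, task: Task) -> None:
        # Claim 1: submit the task to be executed to its corresponding queue.
        self.queues[queue_name].tasks.append(task)

    def check_and_delete(self, queue_name: str) -> None:
        # Claim 1: compare the current execution time with the termination execution time.
        queue = self.queues[queue_name]
        if time.time() <= queue.stop_time:
            return
        # Claims 2-3: a finished task is dropped with the queue; an unfinished
        # task is first moved to the default queue, then the expired task
        # execution queue is deleted.
        self.default_queue.tasks.extend(t for t in queue.tasks if not t.finished)
        del self.queues[queue_name]
```

For example, a queue created for a task with stop_time set one hour ahead would be torn down by the first check_and_delete call made after that hour has elapsed, with any still-unfinished work falling back to the default queue.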
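Claims 4, 6 and 7 describe creating the queue from a resource reservation by refreshing a historical configuration file, modifying the resulting current configuration file, and naming the queue after the task, subject to user authentication of the sender. A hedged sketch follows; the file names, JSON layout and authentication flag are assumptions chosen only to make the steps concrete, not the patent's actual configuration format.

```python
import json
import shutil
from dataclasses import dataclass


@dataclass
class ResourceReservation:
    task_name: str          # the queue is named after the task (claim 6)
    start_time: str         # initial execution time
    stop_time: str          # termination execution time
    target_resources: int   # target resource amount chosen from the time axis

# File names and JSON layout are assumptions made only for illustration.
HISTORY_CONF = "queues.history.json"
CURRENT_CONF = "queues.current.json"


def create_queue(reservation: ResourceReservation, authenticated: bool) -> None:
    # Claim 7: only a sender that has completed user authentication may create a queue.
    if not authenticated:
        raise PermissionError("sender has not completed user authentication")

    # Claim 6: refresh the historical configuration file to obtain the current one.
    shutil.copyfile(HISTORY_CONF, CURRENT_CONF)
    with open(CURRENT_CONF) as f:
        conf = json.load(f)

    # Claim 6: modify the current configuration file with the reservation
    # parameters and name the new queue after the task.
    conf.setdefault("queues", {})[reservation.task_name] = {
        "start_time": reservation.start_time,
        "stop_time": reservation.stop_time,
        "capacity": reservation.target_resources,
    }
    with open(CURRENT_CONF, "w") as f:
        json.dump(conf, f, indent=2)
    # A scheduler (for example a YARN-style capacity scheduler, which the cited
    # literature suggests as the backdrop) would then reload this file so that
    # the new task execution queue takes effect.
```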
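Claim 4 determines the target resource amount from an available resource amount presented on a preset time axis, and claim 5 re-initializes that availability after the queue is deleted. The sketch below models the time axis as discrete slots; the slot granularity, per-slot totals and method names are assumptions for illustration only.

```python
from collections import defaultdict
from typing import Iterable


class TimeAxisAvailability:
    """Illustrative tracker for the available resource amount presented on a
    preset time axis (claims 4-5); the slot granularity is an assumption."""

    def __init__(self, total_per_slot: int) -> None:
        self.total = total_per_slot
        self.reserved = defaultdict(int)   # slot index -> amount already reserved

    def available(self, slot: int) -> int:
        return self.total - self.reserved[slot]

    def reserve(self, slots: Iterable[int], amount: int) -> None:
        # Claim 4: the target resource amount must fit within what the time
        # axis presents as available for every slot of the reservation window.
        slots = list(slots)
        if any(self.available(s) < amount for s in slots):
            raise ValueError("target resource amount exceeds the presented availability")
        for s in slots:
            self.reserved[s] += amount

    def release(self, slots: Iterable[int], amount: int) -> None:
        # Claim 5: after the task execution queue is deleted, the availability
        # presented on the time axis is initialized/updated again.
        for s in slots:
            self.reserved[s] -= amount
```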
CN201911406445.9A 2019-12-31 2019-12-31 Task execution queue processing method and device, storage medium and electronic equipment Active CN113127185B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911406445.9A CN113127185B (en) 2019-12-31 2019-12-31 Task execution queue processing method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911406445.9A CN113127185B (en) 2019-12-31 2019-12-31 Task execution queue processing method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113127185A true CN113127185A (en) 2021-07-16
CN113127185B (en) 2023-11-10

Family

ID=76768854

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911406445.9A Active CN113127185B (en) 2019-12-31 2019-12-31 Task execution queue processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113127185B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150347186A1 (en) * 2014-05-29 2015-12-03 Netapp, Inc. Method and system for scheduling repetitive tasks in o(1)
CN105159782A (en) * 2015-08-28 2015-12-16 北京百度网讯科技有限公司 Cloud host based method and apparatus for allocating resources to orders
WO2017152797A1 (en) * 2016-03-07 2017-09-14 中兴通讯股份有限公司 Method and device for resource reservation
CN108345501A (en) * 2017-01-24 2018-07-31 全球能源互联网研究院 A kind of distributed resource scheduling method and system
US20190294474A1 (en) * 2018-03-26 2019-09-26 Ca, Inc. Predictive queue map for parallel computing resource management
CN108897854A (en) * 2018-06-29 2018-11-27 北京京东金融科技控股有限公司 A kind of monitoring method and device of overtime task
CN109524070A (en) * 2018-11-12 2019-03-26 北京懿医云科技有限公司 Data processing method and device, electronic equipment, storage medium
CN109684092A (en) * 2018-12-24 2019-04-26 新华三大数据技术有限公司 Resource allocation methods and device
CN110069335A (en) * 2019-05-07 2019-07-30 江苏满运软件科技有限公司 Task processing system, method, computer equipment and storage medium
CN110297711A (en) * 2019-05-16 2019-10-01 平安科技(深圳)有限公司 Batch data processing method, device, computer equipment and storage medium
CN110362392A (en) * 2019-07-15 2019-10-22 深圳乐信软件技术有限公司 A kind of ETL method for scheduling task, system, equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
NZANYWAYINGOMA FREDERIC et al.: "Task scheduling and virtual resource optimising in Hadoop YARN-based cloud computing environment", International Journal of Cloud Computing, vol. 7, no. 02, pages 83 - 102 *
PAN Jiayi et al.: "Load-adaptive feedback scheduling strategy for heterogeneous Hadoop clusters", Computer Engineering & Science, vol. 39, no. 03, pages 413 - 423 *
WANG Rongli et al.: "Hadoop YARN scheduling algorithm based on priority weights", Journal of Jilin University (Natural Science Edition), vol. 35, no. 04, pages 443 - 448 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114168302A (en) * 2021-12-28 2022-03-11 中国建设银行股份有限公司 Task scheduling method, device, equipment and storage medium
CN116679878A (en) * 2023-05-31 2023-09-01 珠海妙存科技有限公司 Flash memory data processing method and device, electronic equipment and readable storage medium
CN116679878B (en) * 2023-05-31 2024-04-19 珠海妙存科技有限公司 Flash memory data processing method and device, electronic equipment and readable storage medium
CN116431318A (en) * 2023-06-13 2023-07-14 云账户技术(天津)有限公司 Timing task processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113127185B (en) 2023-11-10

Similar Documents

Publication Publication Date Title
US11922198B2 (en) Assignment of resources in virtual machine pools
US8756599B2 (en) Task prioritization management in a virtualized environment
CN113127185B (en) Task execution queue processing method and device, storage medium and electronic equipment
US11010195B2 (en) K-tier architecture scheduling
US11150951B2 (en) Releasable resource based preemptive scheduling
US10728169B1 (en) Instance upgrade migration
CN112051993A (en) State machine template generation and task processing method, device, medium and equipment
CN111309448A (en) Container instance creating method and device based on multi-tenant management cluster
CN112507303A (en) Cloud desktop management method, device and system, storage medium and electronic equipment
CN111835679A (en) Tenant resource management method and device under multi-tenant scene
US11656912B1 (en) Enabling conditional computing resource terminations based on forecasted capacity availability
US11573823B2 (en) Parallel execution of applications
US10956228B2 (en) Task management using a virtual node
US11249760B2 (en) Parameter management between programs
CN111124291A (en) Data storage processing method and device of distributed storage system and electronic equipment
WO2022148376A1 (en) Edge time sharing across clusters via dynamic task migration
CN115168040A (en) Job preemption scheduling method, device, equipment and storage medium
US11526437B1 (en) Heap space management
US10884789B2 (en) Process tracking
US20230169077A1 (en) Query resource optimizer
US20240020171A1 (en) Resource and workload scheduling
Upadhyay et al. Scheduler in cloud computing using open source technologies
US20220357996A1 (en) Resource management device, resource management method and program
CN118034867A (en) Task scheduling method, device, equipment, medium and program product
CN113918530A (en) Method and device for realizing distributed lock, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant