Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a flowchart of a task scheduling processing method according to an embodiment of the present application. Although the present application provides method operational steps or apparatus configurations as illustrated in the following examples or figures, more or fewer operational steps or module configurations may be included in the method or apparatus based on conventional or non-inventive efforts. In the case of steps or structures where there is no logically necessary cause-and-effect relationship, the execution order of the steps or the block structure of the apparatus is not limited to the execution order or the block structure provided in the embodiments of the present application. The described methods or modular structures, as applied to a device or end product in practice, may be executed sequentially or in parallel (e.g., in the context of parallel processors or multi-threaded processing) in accordance with the embodiments or with the modular structures illustrated in the figures.
Specifically, as shown in Fig. 1, the task scheduling processing method may include:
S1: mapping the task processing channel information in the resource pool into the channel resource information of the configuration center.
The resource pool described in this embodiment may generally include a set of channel resources for implementing job task deployment, and specifically may include one or more physical clusters. Alternatively, in some embodiments the resource pool may comprise a virtual resource pool, which may include one or more deployed logical clusters. Of course, the resource pool may also be deployed to include both physical clusters and logical clusters according to the requirements of the actual distributed task processing scenario. In the application scenario of this embodiment, the resource pool may be deployed to include a plurality of physical clusters that process different task types, with the servers in one physical cluster disposed in the same machine room.
Fig. 2 is a schematic diagram of a deployed scheduling processing framework in the task scheduling processing method according to the present application. In general, each physical cluster may include one or more task processing channels. A task processing channel in this embodiment may be composed of a master scheduling server (Driver) and a plurality of execution servers (Executors) managed by the master scheduling server. In this way, a task processing channel in the present application may be considered a channel resource that can be used for processing a job task, and the resource size of the task processing channel, such as the number of execution servers managed by the master scheduling server, may reflect the job capability of the task processing channel to some extent. In this example, the task processing channels in one physical cluster may all be allocated the same resource size; that is, when the physical cluster is deployed, each task processing channel is set to have one master scheduling server corresponding to the same number of execution servers. Of course, in some other embodiments of the present application, the task processing channels in the resource pool may be allocated resources of different sizes according to preset processing requirements.
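The Driver-plus-executors structure described above can be sketched as follows. This is a minimal illustration in Python; the names `Channel` and `ExecutionServer` are assumptions for illustration, not part of the described embodiment:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExecutionServer:
    server_id: str

@dataclass
class Channel:
    """A task processing channel: one master scheduling server (Driver)
    plus the execution servers it manages."""
    driver_id: str
    executors: List[ExecutionServer] = field(default_factory=list)

    @property
    def capacity(self) -> int:
        # The number of execution servers reflects the channel's job capability.
        return len(self.executors)

# A physical cluster whose channels all have the same resource size
# (three channels, ten execution servers each, as in this embodiment):
cluster = [
    Channel(driver_id=f"D{i}",
            executors=[ExecutionServer(f"E{i}-{j}") for j in range(10)])
    for i in range(3)
]
print([c.capacity for c in cluster])  # → [10, 10, 10]
```

A deployment with channels of different sizes would simply vary the executor count per channel.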
In a distributed task processing scenario, the individual task processing channel information in the deployed resource pool may be mapped into the channel resource information of the configuration center. In the distributed task processing scenario of this embodiment, a configuration center may be set up for task scheduling deployment, where the configuration center may be used to record and update the channel resource information of the task processing channels in the resource pool, and may also provide a query access interface for the resource library in the scheduling cluster, so that a resource applicant can obtain resource information of the resource pool. In this embodiment, a task processing channel of the resource pool may register its own task processing channel information (resource condition) with the configuration center. After the registration succeeds, the configuration center may map the task processing channel information of each task processing channel of the resource pool into channel resource information of the configuration center according to the corresponding data format, and the configuration center may store and update the channel resource information in real time.
Generally, when a certain task processing channel in a physical cluster is started for use, its resource condition can be registered with the configuration center. In an embodiment of the task scheduling processing method of the present application, the configuration center may further sense through heartbeat whether the task processing channel is still alive and effective. When a task processing channel fails (for example, goes offline or malfunctions), the channel resource information of the configuration center is correspondingly updated. In the actual task scheduling processing operation, the occupation and usage of the task processing channels in the resource pool often change dynamically in real time; therefore, in an embodiment of the present application, the configuration center may be configured to sense the survival of each task channel in the resource pool in real time and to update the occupation of the task processing channels in real time, so that the channel resource information of the configuration center accurately and reliably reflects the resource condition of the task processing channels in the current resource pool. Accordingly, in another embodiment of a task scheduling processing method described in the present application, the configuration center may be configured for:
acquiring the survival condition of the task processing channels in the resource pool through heartbeat; and
updating the task processing channel information in the resource pool in real time.
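The heartbeat-based liveness tracking described above can be sketched as follows. This is a minimal in-memory illustration; the `ConfigCenter` class, its method names, and the 3-second timeout are all assumptions for illustration:

```python
import time

class ConfigCenter:
    """Illustrative sketch: tracks registered channels and drops those
    whose heartbeat has expired, keeping channel resource information
    up to date in real time."""
    def __init__(self, heartbeat_timeout=3.0):
        self.heartbeat_timeout = heartbeat_timeout
        self.channels = {}    # channel_id -> channel resource information
        self.last_beat = {}   # channel_id -> last heartbeat timestamp

    def register(self, channel_id, info):
        # A task processing channel registers its own resource condition.
        self.channels[channel_id] = dict(info, status="idle")
        self.last_beat[channel_id] = time.time()

    def heartbeat(self, channel_id):
        self.last_beat[channel_id] = time.time()

    def refresh(self, now=None):
        # Remove channels whose heartbeat has not arrived within the timeout,
        # i.e. channels that have failed or gone offline.
        now = time.time() if now is None else now
        for cid in list(self.channels):
            if now - self.last_beat[cid] > self.heartbeat_timeout:
                del self.channels[cid]
                del self.last_beat[cid]

center = ConfigCenter(heartbeat_timeout=3.0)
center.register("P1", {"driver": "D1", "executors": 10})
center.refresh(now=time.time() + 10)   # no heartbeat for 10 s -> channel dropped
print("P1" in center.channels)         # → False
```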
The task processing channel information described in this embodiment may include information on the resource conditions of the task processing channels; for example, for an idle task processing channel P1, P1 includes a master scheduling server D1 and 10 corresponding execution servers E1 to E10. The channel resource information of the configuration center can reflect which task processing channels in which physical clusters of the current resource pool are occupied, and which task processing channels are idle and available for application.
In the embodiment of distributed task processing, the task processing channel information in the deployed resource pool can be mapped to the channel resource information of the configuration center. The configuration center may store and update channel resource information.
S2: when a scheduling request of at least one job task is received, querying the configuration center whether there is an idle channel resource for processing the job task.
In the application scenario for implementing distributed task processing, when one or more job tasks need to be processed, the scheduling processing module may acquire channel resource information from the resource library. The resource library described in this embodiment may query and apply for resources from the configuration center; when a resource application arrives, it may be submitted to the corresponding configuration center as a task. As shown in fig. 2, in the present application a scheduling cluster with multiple resource pools may exist at the same time; that is, the present application allows scheduling requests of multiple job tasks to be processed simultaneously and in parallel. Specifically, when receiving a scheduling request of one or more job tasks, a resource pool in the scheduling cluster may query the configuration center whether there is currently a free channel resource for processing the job task. The configuration center stores the channel resource information mapped from the task processing channel information in the resource pool, and can respond to the query of the scheduling cluster.
When a scheduling request of one or more job tasks is received, whether idle channel resources for processing the job tasks exist or not can be inquired from the configuration center.
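The idle-channel query of step S2 might look like the following sketch. The dictionary layout of the channel resource information and the function name `query_idle_channels` are assumptions for illustration:

```python
def query_idle_channels(channel_info, needed):
    """Return up to `needed` idle channel ids from the configuration center's
    channel resource information, or an empty list if none are free."""
    idle = [cid for cid, info in channel_info.items() if info["status"] == "idle"]
    return idle[:needed]

# Hypothetical channel resource information held by the configuration center:
channels = {
    "P1": {"cluster": "A", "status": "occupied"},
    "P2": {"cluster": "A", "status": "idle"},
    "P3": {"cluster": "B", "status": "idle"},
}
print(query_idle_channels(channels, 1))   # → ['P2']
print(query_idle_channels(channels, 5))   # → ['P2', 'P3']
```

If the returned list is empty, the scheduling request fails to obtain resources and may re-apply later, as described in the following steps.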
S3: when the query result of the configuration center indicates that idle channel resources exist, selecting a task processing channel corresponding to the scheduling request from the resource pool according to the resource allocation priority set for the job task, and allocating the task processing channel to the corresponding job task.
The configuration center can record channel resource information reflecting the occupation and execution condition of the current task processing channels in the resource pool, and can judge whether idle channel resources for processing a job task exist according to the resource information applied for in the scheduling request of the job task. If the scheduling requests of the current two job tasks T1 and T2 each require a task processing channel in the physical cluster A, the resource pools S1 and S2, which may be responsible for T1 and T2 respectively, query the configuration center whether there are free channel resources for processing job tasks T1 and T2. The configuration center queries whether free channel resources capable of processing job tasks T1 and T2 exist according to the recorded channel resource information of all task processing channels in the resource pool. If the query result of the configuration center is that there are currently 6 idle channel resources capable of processing job tasks T1 and T2, idle task processing channels in the resource pool can be selected and allocated to the corresponding tasks.
According to the task scheduling processing method, a plurality of task processing channels can be provided to allocate, in parallel, the channel resources required by job tasks. In some application scenarios of the present application, when scheduling requests of a plurality of job tasks are received, the plurality of resource pools of the scheduling cluster may each query the configuration center whether there is an idle channel resource for processing the job task corresponding to that resource pool. At this time, if the information of the configuration center indicates that idle channel resources exist in the current resource pool but the number of resources is less than the amount requested by the plurality of job tasks, this embodiment may set resource allocation priorities in advance for the concurrently submitted distributed tasks in a certain manner, so as to solve the problem of channel resource allocation when a plurality of job tasks concurrently apply for resources. Specifically, the resource allocation priority used when job tasks contend for resources can be set in a pre-selected manner according to the task scheduling processing requirements of the distributed task processing.
An embodiment of the present application provides a resource allocation processing method when a distributed task is concurrently submitted. Specifically, in another embodiment of the task scheduling processing method according to the present application, the setting of the resource allocation priority for the job task may include:
S301: when the query of the configuration center indicates that idle channel resources exist but the number of idle channel resources is smaller than the amount of resources applied for by the current plurality of job tasks, using either the time sequence in which the job tasks among the plurality of job tasks apply for resources, or the weight priority of the job tasks, as the resource allocation priority of the job tasks.
In a specific implementation process, the manner in which concurrently submitted distributed tasks apply for resources can be set according to design requirements. For example, resource allocation may be performed in the order in which the job tasks apply for resources. When the number of resources in the resource pool is less than the resource application amount of a plurality of concurrently submitted tasks, the job task that submits its resource application first can preferentially preempt idle task processing channel resources, while the job tasks submitted later fail in resource preemption. Of course, a job task that fails to preempt resources can return to the scheduler and continue to apply for resources. A job task that has been allocated a corresponding task processing channel resource can use the task processing channel to execute the designated task processing running logic.
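The first-come-first-served contention described above can be sketched as follows. The function `allocate_fifo` and the task/channel names are assumptions for illustration; the weight-priority alternative of S301 would simply sort the requests by weight before allocating:

```python
from collections import deque

def allocate_fifo(requests, idle_channels):
    """First-come-first-served allocation: requests ordered by application
    time preempt idle channels; later ones fail preemption and may return
    to the scheduler to re-apply."""
    pending = deque(requests)
    granted, failed = {}, []
    free = list(idle_channels)
    while pending:
        task = pending.popleft()
        if free:
            granted[task] = free.pop(0)   # earlier request preempts a channel
        else:
            failed.append(task)           # fails preemption; re-applies later
    return granted, failed

# Three concurrent requests contend for a single idle channel:
granted, failed = allocate_fifo(["T1", "T2", "T3"], ["P2"])
print(granted, failed)   # → {'T1': 'P2'} ['T2', 'T3']
```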
According to the task scheduling processing method, the master scheduling server for processing job tasks and its corresponding execution servers can be abstracted into a channel in a distributed scenario, and the channels can register their own channel information with a unified configuration center, where it is mapped into channel resource information. When a job task of the job request end needs to use resources, the channel resource information can be obtained from the configuration center, and whether idle channel resources available for the job task currently exist is judged according to the resource information of the resource pool. If so, scheduling resources may be allocated for task job execution. In this way, during task scheduling processing, when scheduling requests of a plurality of job tasks are received, each scheduled task can query the configuration center whether available idle channel resources exist, and corresponding resources are then allocated to the plurality of job tasks according to the set job task priority and the query result of the configuration center, so that multi-channel parallel resource allocation for a plurality of concurrently submitted tasks is realized. While job tasks that have not preempted resources wait in the queue, the configuration center feeds back idle channel resources after updating, so that resources can be rapidly allocated to the job tasks in the queue. The task scheduling processing method can realize multi-channel resource allocation, can allocate task resources to jobs as required, solves the problem that subsequent tasks cannot be executed due to channel blockage, improves resource utilization, and improves task scheduling processing efficiency.
As described above, in an embodiment of the present application, the configuration center may store channel resource information reflecting the task processing channels in the resource pool. In a specific embodiment, whether a task processing channel is alive can be known through heartbeat sensing; for example, the configuration center judges whether a task processing channel has failed through the heartbeat-sensed survival of its master scheduling server. When the master scheduling server of a certain task processing channel in the resource pool fails, such as through an abnormal fault or the resource going offline, the configuration center can update the channel resource information in real time. Meanwhile, if there are job tasks that are running on, or have been allocated to but not yet executed on, the failed task processing channel, an embodiment provided by the present application may process them in the following manner, so as to avoid a situation in which an abnormality of a single job task causes task loss or prevents subsequent task jobs from being processed. Specifically, in another embodiment of a task scheduling processing method provided by the present application, the method may further include:
S401: when it is determined that a task processing channel in the resource pool has failed, updating the corresponding channel resource information in the configuration center; and
S402: processing the job tasks scheduled to the failed task processing channel according to the scheduling request in the following manner:
S4021: interrupting the job task being executed on the failed task processing channel;
S4022: abandoning the queued job tasks that have not yet been executed on the failed task processing channel;
S4023: rescheduling the job tasks scheduled to the failed task processing channel, and allocating corresponding task processing channels to the rescheduled job tasks according to the channel resource information updated by the configuration center.
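Steps S401 to S4023 can be sketched as one failure-handling routine. All names here (`handle_channel_failure`, the dictionary layouts, the `reschedule` callback) are illustrative assumptions:

```python
def handle_channel_failure(channel_id, running, queued, channel_info, reschedule):
    """On channel failure: update the configuration center's information,
    interrupt the running task, abandon queued tasks, and reschedule
    all affected tasks via the supplied callback."""
    channel_info.pop(channel_id, None)                # S401: update config center
    to_reschedule = []
    task = running.pop(channel_id, None)              # S4021: interrupt running task
    if task is not None:
        to_reschedule.append(task)
    to_reschedule.extend(queued.pop(channel_id, []))  # S4022: abandon queued tasks
    for t in to_reschedule:                           # S4023: reschedule them all
        reschedule(t)

# Channel P1 fails while running T1 with T2, T3 queued behind it:
info = {"P1": {"executors": 10}, "P2": {"executors": 10}}
running = {"P1": "T1"}
queued = {"P1": ["T2", "T3"]}
resubmitted = []
handle_channel_failure("P1", running, queued, info, resubmitted.append)
print(resubmitted)   # → ['T1', 'T2', 'T3']
```

In a real deployment the `reschedule` callback would submit the task back to the scheduling cluster after a preset delay, as described below.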
Fig. 3 is a flowchart illustrating a method according to another embodiment of a task scheduling processing method according to the present application. In a specific execution process, whether a master scheduling server (Driver) is alive can be sensed through heartbeat. When the master scheduling server is sensed to have failed, the channel resource information corresponding to the task processing channel information registered in the configuration center also becomes invalid. When the channel resource goes offline, the running job task can be directly interrupted and closed; for example, its running logic can be marked as failed. The job tasks queued to call the failed or offline task processing channel can be abandoned. Furthermore, the job tasks scheduled to the failed task processing channel can be rescheduled; for example, the interrupted or abandoned job tasks can be rescheduled back to the scheduling cluster after a preset delay, and channel resources can be reapplied for task processing. Therefore, the processing method provided by this embodiment does not cause job task execution loss when resources go offline in a distributed task processing scenario; meanwhile, if a task channel becomes abnormal during execution, the job tasks subsequently queued to use the abnormal task processing channel can select other channels for execution. Of course, the job task executing on the abnormal processing channel may also be interrupted, and another channel reselected to execute the job task.
In another application scenario of the task scheduling processing method, the processing times of different job tasks often differ. Fig. 4 is a flowchart illustrating a method according to another embodiment of a task scheduling processing method according to the present application. In the application scenario of distributed task processing, some job tasks may legitimately have long processing times owing to their own job processing requirements. However, if the processing time of a job task is too long and exceeds the normal execution period, the job task can be considered abnormal during processing, and a job task whose execution time exceeds the set time threshold should be handled accordingly. Therefore, in another embodiment of the task scheduling processing method according to the present application, a thread may be started when a job task is executed so as to monitor the execution time on the task processing channel; specifically, the method may further include:
S501: monitoring the execution time for which the job task occupies the task processing channel, and stopping the job task when the occupied execution time is greater than the set maximum execution time threshold.
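One way to realize the monitoring thread of S501 can be sketched in Python as follows. Note that a Python thread cannot be forcibly killed, so this sketch only marks the overrunning task as exceptional and stops waiting for it; the function name and statuses are assumptions for illustration:

```python
import threading
import time

def run_with_timeout(job, max_execution_time):
    """Run `job` in a worker thread; if it exceeds the maximum execution
    time threshold, mark it as an exception for logging and possible
    rescheduling instead of letting it block the channel."""
    result = {"status": "running"}

    def worker():
        job()
        result["status"] = "finished"

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join(timeout=max_execution_time)   # wait no longer than the threshold
    if t.is_alive():
        result["status"] = "exception"   # overran: mark for exception handling
    return result["status"]

print(run_with_timeout(lambda: time.sleep(0.01), max_execution_time=1.0))
print(run_with_timeout(lambda: time.sleep(1.0), max_execution_time=0.05))
```

The first call finishes within the threshold; the second overruns and is marked as an exception.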
The stopped job task may be marked as exception handling and logged. Furthermore, a thread can be set to scan the job tasks recorded as exception handling, and then determine whether to continue scheduling those job tasks according to the preset user configuration. Fig. 5 is a flowchart illustrating a method according to another embodiment of a task scheduling processing method according to the present application. As shown in fig. 5, in another embodiment of a task scheduling processing method provided by the present application, the method may further include:
S502: scanning and acquiring the job tasks marked as exception handling according to a preset period, and determining whether to continue scheduling the job tasks marked as exception handling according to the set scheduling processing configuration information.
In a specific implementation process, if the scheduling processing configuration information of the user is set to no, the job task whose occupied execution time is greater than the maximum execution time threshold can be killed, the job task is discarded, and corresponding log records are made. If the scheduling processing configuration information is set to yes, the job tasks marked as exception handling can be rescheduled after a period of delay according to the configuration, reapplying for channel resources, being allocated resources, and so on. In this embodiment of the present application, a job task stuck in its running logic is killed and transferred to another channel for continued processing, so as to free up time and channel resources for processing subsequent job tasks. The implementation scheme of this embodiment can greatly improve the utilization rate of channel resources in the resource pool and improve the scheduling processing efficiency of job tasks.
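The periodic scan of S502 might be sketched as follows. The `retry_exceptions` configuration key stands in for the user's scheduling processing configuration and, like the other names, is an illustrative assumption:

```python
def scan_exception_tasks(task_log, user_config):
    """Scan tasks marked as exception handling; per the user's scheduling
    configuration, either reschedule them (re-apply for channel resources)
    or discard them (kill and log)."""
    rescheduled, discarded = [], []
    for task in task_log:
        if task["status"] != "exception":
            continue
        if user_config.get("retry_exceptions", False):
            task["status"] = "pending"     # will re-apply for channel resources
            rescheduled.append(task["id"])
        else:
            task["status"] = "discarded"   # killed, with a corresponding log record
            discarded.append(task["id"])
    return rescheduled, discarded

log = [{"id": "T1", "status": "exception"}, {"id": "T2", "status": "finished"}]
print(scan_exception_tasks(log, {"retry_exceptions": True}))   # → (['T1'], [])
```

In practice this function would run on a timer thread at the preset period.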
In the above-described embodiments, the task processing channels in the resource pool may be set to have the same resource allocation size, and the job processing capability of each task processing channel in one physical cluster in the resource pool is the same. In another embodiment of the task scheduling processing method, task processing channels with different job processing capabilities and different resource allocation sizes can be set in the same type of physical cluster. The task processing channel in the resource pool may include, in the task processing channel information it registers with the configuration center, information indicating the job capability of the task processing channel, or may also include other information such as a channel type. In this way, the client of the job request can register the channel type of interest with the configuration center and acquire all task processing channel information of that channel type. When the job task actually starts scheduling processing, the idle channel resource with the strongest current processing capability can be allocated to the job task for processing, and the channel resource information is updated to the configuration center after processing is finished. Therefore, the present application further provides another embodiment of a task scheduling processing method, wherein the channel resource information of the configuration center includes information on the job processing capability of the task processing channels;
correspondingly, the selecting a task processing channel corresponding to the scheduling request from the resource pool and allocating the task processing channel to the corresponding job task may include: and selecting a task processing channel with the job processing capacity matched with the resource requirement of the scheduling request from the resource pool to distribute to the corresponding job task.
Therefore, resources can be allocated on demand according to the resource requirements of the job task: a task processing channel with high job capability is allocated when the resource requirement is high, and a task processing channel with correspondingly lower job capability is allocated when the resource requirement is low, so that a job task occupying few resources does not waste a large channel.
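The capability matching of this embodiment can be illustrated with a best-fit selection sketch. The dictionary layout and the name `match_channel` are assumptions; best-fit (the smallest capability that still satisfies the demand) is one plausible reading of matching capability to demand so that small jobs do not occupy large channels:

```python
def match_channel(idle_channels, demand):
    """Pick the idle channel whose job processing capability best matches
    the scheduling request's resource demand: the smallest capability
    that still satisfies the demand. Returns None if nothing fits."""
    candidates = [c for c in idle_channels if c["capability"] >= demand]
    if not candidates:
        return None
    return min(candidates, key=lambda c: c["capability"])

idle = [
    {"id": "P1", "capability": 4},
    {"id": "P2", "capability": 16},
    {"id": "P3", "capability": 8},
]
print(match_channel(idle, demand=6)["id"])   # → P3, the smallest channel that fits
```

A variant that always picks the strongest idle channel, as also mentioned above, would use `max` instead of `min`.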
Based on the task scheduling processing method, the application also provides a task scheduling processing device. Fig. 6 is a schematic block diagram of an embodiment of a task scheduling processing apparatus provided in the present application, and as shown in fig. 6, the apparatus may include:
a resource pool 101, which can be used for storing the resources of the deployed task processing channels;
the configuration center 102 may be configured to obtain channel resource information mapped by task processing channel information in the resource pool;
the resource query module 103 may be configured to, when receiving a scheduling request of at least one job task, query the configuration center whether there is an idle channel resource for processing the job task;
the scheduling processing module 104 may be configured to, when the query result of the configuration center indicates that the idle channel resource exists, select a task processing channel corresponding to the scheduling request from the resource pool according to the resource allocation priority set for the job task, and allocate the task processing channel to the corresponding job task.
The task scheduling processing device provided by the embodiment of the present application can abstract the master scheduling server for processing job tasks and its corresponding execution servers into a channel in a distributed scenario, and a plurality of channels can register their own channel information with a unified configuration center, where it is mapped into channel resource information. When a job task of the job request end needs to use resources, the channel resource information can be obtained from the configuration center, and whether idle channel resources available for the job task currently exist is judged according to the resource information of the resource pool. If so, scheduling resources may be allocated for task job execution. In this way, during task scheduling processing, when scheduling requests of a plurality of job tasks are received, each scheduled task can query the configuration center whether available idle channel resources exist, and corresponding resources are then allocated to the plurality of job tasks according to the set job task priority and the query result of the configuration center, so that multi-channel parallel resource allocation for a plurality of concurrently submitted tasks is realized. While job tasks that have not preempted resources wait in the queue, the configuration center feeds back idle channel resources after updating, so that resources can be rapidly allocated to the job tasks in the queue. The task scheduling processing device can realize multi-channel resource allocation, can allocate task resources to jobs as required, solves the problem that subsequent tasks cannot be executed due to channel blockage, improves resource utilization, and improves task scheduling processing efficiency.
As mentioned above, in one embodiment of the apparatus, the task processing channels in the resource pool 101 are enabled and register their own resource status with the configuration center. After the registration is successful, the configuration center 102 may sense the survival condition of the task processing channels through heartbeat. When task processing channel resources in the resource pool fail or go offline, information such as application, occupation, and release can be synchronously updated to the configuration center. Thus, in an embodiment of the apparatus described herein, the configuration center 102 may be configured for:
acquiring the survival condition of the task processing channels in the resource pool through heartbeat; and
updating the task processing channel information in the resource pool in real time.
When the task processing channel fails (such as a lower line or a fault), the channel resource information of the configuration center is correspondingly updated. In the actual task scheduling processing operation process, the occupation and use conditions of the task processing channels in the resource pool are often dynamically changed in real time, so the configuration center can be set to sense the survival conditions of each task channel in the resource pool in real time and update the occupation conditions of the task processing channels in real time. Therefore, the channel resource information of the configuration center can accurately and reliably reflect the resource condition of the task processing channel in the current resource pool.
Fig. 7 is a schematic block diagram of an embodiment of a scheduling processing module 104 in the task scheduling processing apparatus provided in the present application. Specifically, as shown in fig. 7, the scheduling processing module 104 may include:
the resource judging module 1041 may be configured to judge, according to an available resource query result of the configuration center, whether the number of idle channel resources in the resource pool meets a requirement of the resource amount applied by the current multiple job tasks;
the priority setting module 1042 may be configured to, when the determination result of the resource judging module 1041 is negative, use either the time sequence in which the job tasks among the plurality of job tasks apply for resources, or the weight priority of the job tasks, as the resource allocation priority of the job tasks;
the channel resource allocation module 1043 may be configured to, when the resource query result of the configuration center indicates that the idle channel resource exists, select a task processing channel corresponding to the scheduling request from the resource pool according to the resource allocation priority set by the priority setting module 1042 to allocate to the corresponding job task.
In the specific implementation process, resource allocation can be performed in the order in which the job tasks apply for resources. When the number of resources in the resource pool is less than the resource application amount of a plurality of concurrently submitted tasks, the job task that submits its resource application first can preferentially preempt idle task processing channel resources, while the job tasks submitted later fail in resource preemption. Of course, a job task that fails to preempt resources can return to the scheduler and continue to apply for resources.
Fig. 8 is a schematic block structure diagram of another embodiment of a task scheduling processing apparatus provided in the present application, and as shown in fig. 8, the apparatus may further include:
the resource failure processing module 105 may be configured to update channel resource information corresponding to the configuration center when it is determined that a task processing channel in the resource pool is failed; and processing the job task called to the failed task processing channel according to the scheduling request in the following way:
interrupting the job task being executed on the failed task processing channel;
abandoning the queued job tasks that have not yet been executed on the failed task processing channel;
rescheduling the job tasks scheduled to the failed task processing channel, and allocating corresponding task processing channels to the rescheduled job tasks according to the channel resource information updated by the configuration center.
In a specific execution process, whether a master scheduling server (Driver) is alive can be sensed through heartbeats. When the master scheduling server is sensed to have failed, the channel resource information registered in the configuration center for the corresponding task processing channel also becomes invalid. When a channel resource goes offline, the job tasks running on it can be directly interrupted, for example by marking their running status as failed, and the job tasks queued to call the failed or offline task processing channel can be abandoned. Furthermore, the job tasks dispatched to the failed task processing channel can be rescheduled; for example, the interrupted or abandoned job tasks can be returned to the scheduling cluster after a preset delay to reapply for channel resources for task processing. Therefore, the processing method provided by this embodiment does not lose job task executions when resources go offline in a distributed task processing scenario, and if one task channel becomes abnormal during execution, the job tasks subsequently queued for the abnormal task processing channel can be executed on other channels.
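The three-part failure handling (interrupt running tasks, abandon queued tasks, reschedule both) can be sketched as below. All names here (`registry` standing in for the configuration center's channel map, the `scheduler.submit` interface, the attribute layout of channels and tasks) are assumptions made for illustration:

```python
def handle_channel_failure(channel, registry, scheduler, retry_delay=5.0):
    """On channel failure: invalidate its entry in the configuration
    center, interrupt running tasks (marking them failed), abandon
    queued tasks, and hand both groups back to the scheduler for
    delayed rescheduling. Illustrative sketch, not the patented system."""
    registry.pop(channel.name, None)      # invalidate channel resource info
    to_reschedule = []
    for task in channel.running:          # interrupt tasks in execution
        task.status = "failed"
        to_reschedule.append(task)
    to_reschedule.extend(channel.queued)  # abandon un-executed queued tasks
    channel.running.clear()
    channel.queued.clear()
    for task in to_reschedule:            # re-apply for channel resources later
        scheduler.submit(task, delay=retry_delay)
    return to_reschedule
```

Because interrupted and abandoned tasks are resubmitted rather than dropped, no job task execution is lost when a channel goes offline.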
Fig. 9 is a schematic block structure diagram of another embodiment of a task scheduling processing apparatus provided in this application, and as shown in fig. 9, the apparatus may further include:
the timeout processing module 106 may be configured to monitor the time for which a job task occupies its task processing channel, and to stop a job task whose occupation time exceeds a set maximum execution time threshold.
In a distributed task processing scenario, if the processing time of a job task is too long and exceeds its normal execution period, the job task can be considered to be abnormal during processing, and a job task whose execution time exceeds the set time threshold should be handled accordingly.
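The timeout check can be sketched as a simple scan over running jobs; the mapping of job id to start timestamp and the function name are hypothetical conveniences for this sketch:

```python
import time

def stop_timed_out_jobs(jobs, max_execution_seconds, now=None):
    """Return the jobs whose channel occupation time exceeds the maximum
    execution threshold, so they can be stopped and marked for abnormal
    handling. `jobs` maps job_id -> start timestamp (assumed layout)."""
    now = time.time() if now is None else now
    timed_out = []
    for job_id, started_at in jobs.items():
        if now - started_at > max_execution_seconds:
            timed_out.append(job_id)   # exceeds threshold: stop this job
    return timed_out
```

A monitoring thread could call this at a fixed interval and pass the returned ids to the exception-handling path described next.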
Fig. 10 is a schematic block structure diagram of another embodiment of a task scheduling processing apparatus provided in this application, and as shown in fig. 10, the apparatus may further include:
the task restarting module 107 may be configured to scan and obtain the job task marked as abnormal processing according to a preset period, and determine whether to continue to call the job task marked as abnormal processing according to the set scheduling processing configuration information.
Furthermore, a thread can be set up to scan the job tasks recorded as requiring exception handling, and then determine whether to continue scheduling them according to preset user configuration. If the user's scheduling processing configuration is set to no, a job task whose occupation time exceeds the maximum execution time threshold can be killed and discarded, with a corresponding log record made. If the scheduling processing configuration is set to yes, the job tasks marked for exception handling can be rescheduled after a delay set in the configuration, reapplying for channel resources, being allocated resources, and so on. In this embodiment of the application, a job task whose execution has run too long is killed and transferred to another channel for continued processing, freeing time and channel resources for subsequent job tasks. This implementation can greatly improve the utilization of channel resources in the resource pool and the scheduling efficiency of job tasks.
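The decision made by the scanning thread can be sketched as follows; the configuration flag, scheduler interface, and log shape are illustrative assumptions:

```python
def handle_abnormal_jobs(abnormal_jobs, reschedule_enabled, scheduler, log, delay=10.0):
    """Process jobs marked as abnormal: if the user's scheduling
    configuration says yes, reschedule each after a delay so it can
    re-apply for channel resources; if no, discard it and keep a log
    record. Sketch only; names are assumptions."""
    for job in abnormal_jobs:
        if reschedule_enabled:
            scheduler.submit(job, delay=delay)  # reschedule, reapply resources
        else:
            log.append(f"killed and discarded job {job}")  # kill + log record
```

The same routine would run inside the periodic scanning thread, with `abnormal_jobs` produced by the timeout check above it.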
In another embodiment of the task scheduling processing apparatus provided by the present application, the channel resource information in the configuration center 102 may include information of job processing capability of a task processing channel;
correspondingly, the selecting, by the scheduling processing module 104, a task processing channel corresponding to the scheduling request from the resource pool to allocate to the corresponding job task includes: and selecting a task processing channel with the job processing capacity matched with the resource requirement of the scheduling request from the resource pool to distribute to the corresponding job task.
In this embodiment, the client issuing the job request may register the channel type it is interested in with the configuration center and obtain all task processing channel information of that channel type. When scheduling of the job task actually begins, the idle channel resource with the strongest current processing capability can be allocated to the job task for processing, and after processing is finished the channel resource information is updated in the registration center. In this way, resources can be allocated on demand according to the resource requirements of the job task: a task processing channel with high processing capacity is allocated when the resource requirement is high, and a channel with correspondingly lower capacity when the requirement is low, avoiding the waste caused by a job task that needs few resources occupying a large channel.
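One way to realize the capacity matching just described is a best-fit selection: among the idle channels of the registered type, pick the smallest capacity that still satisfies the job's requirement. The dictionary layout and function name are assumptions for this sketch:

```python
def pick_channel_by_capacity(idle_channels, required_capacity):
    """Pick the idle task processing channel whose job processing
    capacity best matches (smallest capacity that still satisfies) the
    job's resource requirement, avoiding waste of large channels on
    small jobs. `idle_channels` maps channel name -> capacity (assumed)."""
    suitable = {name: cap for name, cap in idle_channels.items()
                if cap >= required_capacity}
    if not suitable:
        return None                        # no channel can satisfy the request
    return min(suitable, key=suitable.get) # smallest sufficient capacity
```

A high-demand job thus lands on a high-capacity channel only when nothing smaller suffices, leaving large channels free for the jobs that need them.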
The task scheduling processing method and apparatus described above can be used in a service system that processes job tasks in a distributed service scenario. The system may comprise one or more physical clusters, a configuration center, and a scheduling processing module. Accordingly, the present application further provides a system for scheduling resources to process a plurality of job tasks in a distributed application scenario. Specifically, in an embodiment of a distributed task scheduling processing system provided in the present application, the system may include:
a resource pool including at least one physical cluster, wherein the physical cluster includes deployed task processing channels each consisting of a master scheduling server and corresponding execution servers;
the configuration center may be configured to map the task processing channels in the resource pool to channel resource information, sense whether the task processing channels in the resource pool are alive through heartbeats, and update the channel resource information of the resource pool in real time.
The scheduling cluster may be configured to query the configuration center, according to a received scheduling request of a job task, for whether an idle channel resource for processing the job task exists; and further configured to, when the query result of the configuration center indicates that an idle channel resource exists, select a task processing channel corresponding to the scheduling request from the resource pool according to the resource allocation priority set for the job task and allocate it to the corresponding job task.
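The configuration center's heartbeat-based liveness tracking can be sketched as a timestamp map with expiry; the class name, timeout value, and method shapes are illustrative assumptions, not the claimed system:

```python
import time

class ConfigCenter:
    """Minimal sketch of a configuration center that records a heartbeat
    timestamp per task processing channel and expires entries whose
    master scheduling server (Driver) heartbeat has gone stale."""
    def __init__(self, heartbeat_timeout=3.0):
        self.timeout = heartbeat_timeout
        self.channels = {}   # channel name -> last heartbeat timestamp

    def heartbeat(self, channel_name, now=None):
        # a channel's Driver calls this periodically to report liveness
        self.channels[channel_name] = time.time() if now is None else now

    def alive_channels(self, now=None):
        now = time.time() if now is None else now
        # drop channels whose heartbeat has expired (resource failed/offline)
        self.channels = {n: t for n, t in self.channels.items()
                         if now - t <= self.timeout}
        return sorted(self.channels)
```

Expired entries disappear from the channel resource information, so the scheduling cluster's queries only ever see channels whose Drivers are currently alive.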
As described above, in another embodiment, the distributed task scheduling processing system may further include:
the resource failure processing unit may be configured to update the corresponding channel resource information in the configuration center when it is determined that a task processing channel in the resource pool has failed, and to process the job tasks dispatched to the failed task processing channel according to the scheduling request in the following ways:
interrupting the job tasks being executed on the failed task processing channel;
abandoning the queued job tasks on the failed task processing channel that have not yet been executed;
and rescheduling the job tasks dispatched to the failed task processing channel, and allocating corresponding task processing channels to the rescheduled job tasks according to the channel resource information updated by the configuration center.
In another embodiment, the distributed task scheduling processing system may further include:
the timeout processing module may be configured to monitor the time for which a job task occupies its task processing channel and to stop a job task whose occupation time exceeds a set maximum execution time threshold; and may be further configured to scan for job tasks marked for exception handling at a preset period, and determine whether to continue dispatching them according to the set scheduling processing configuration information.
Specifically, the description and the selection setting manner of the resource allocation priority, the physical cluster, the channel resource failure (offline), the job task execution timeout, and the like in the distributed task scheduling processing system provided in the foregoing embodiment may refer to the foregoing description, and are not described herein again.
The present application provides a task scheduling processing method, apparatus, and system that, by adopting a multi-channel configuration, can resolve the channel blocking caused by a single deadlocked job, support parallel submission and parallel processing of multiple job tasks, quickly and efficiently allocate the required resources to job tasks, and greatly improve task scheduling efficiency and resource utilization.
Although the present application refers to descriptions of resource pool setup, channel resource mapping, Driver liveness sensing through heartbeats, resource allocation, monitoring of resource execution time by a thread, job task restarting, and other data setting, information interaction, and information monitoring operations, the present application is not limited to cases that fully comply with industry data processing standards, communication and interaction protocols, or the described embodiments. Embodiments slightly modified on the basis of the described design language, communication identification, protocol, or embodiment can also achieve the same, equivalent, or similar effects, or the effects expected after the modification. Of course, even if the data processing and determination modes of industry standards and protocols are not adopted, the same application can still be realized as long as the resource configuration, information interaction, and information determination and feedback modes of the embodiments described above are satisfied, which is not described in detail herein.
Although the present application provides method steps as described in an embodiment or flowchart, more or fewer steps may be included based on conventional or non-inventive means. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When an actual apparatus or system product executes, it may execute sequentially or in parallel (e.g., in the context of parallel processors or multi-threaded processing) according to the embodiments or methods shown in the figures.
The units, devices or modules illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, in implementing the present application, the functions of each module may be implemented in the same or multiple software and/or hardware, or the modules implementing the same functions may be implemented by a combination of multiple sub-modules or sub-units, for example, the memory may be divided into a template repository and a tag repository, and the template, the set structure tag, the set variable tag, and the like may be stored separately.
Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer readable program code, the same functionality can be implemented by logically programming method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be considered as a hardware component, and the means included therein for performing the various functions may also be considered as a structure within the hardware component. Or even means for performing the functions may be regarded as being both a software module for performing the method and a structure within a hardware component.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, classes, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, or the like, and includes several instructions for enabling a computer device (which may be a personal computer, a mobile terminal, a server, or a network device) to execute the method according to the embodiments or some parts of the embodiments of the present application.
The embodiments in the present specification are described in a progressive manner, and the same or similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable electronic devices, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
While the present application has been described by way of embodiments, those of ordinary skill in the art will appreciate that many variations and modifications of the present application are possible without departing from its spirit, and it is intended that the appended claims cover such variations and modifications.