CN112130979B - Method, device, terminal and medium for scheduling task and training neural network model


Info

Publication number
CN112130979B
Authority
CN
China
Prior art keywords
task
constructed
main task
construction
time consumption
Prior art date
Legal status
Active
Application number
CN202011050166.6A
Other languages
Chinese (zh)
Other versions
CN112130979A
Inventor
钱民乾
Current Assignee
Spreadtrum Communications Shanghai Co Ltd
Original Assignee
Spreadtrum Communications Shanghai Co Ltd
Priority date
Filing date
Publication date
Application filed by Spreadtrum Communications Shanghai Co Ltd filed Critical Spreadtrum Communications Shanghai Co Ltd
Priority to CN202210669734.3A (CN115061794A)
Priority to CN202011050166.6A (CN112130979B)
Publication of CN112130979A
Application granted
Publication of CN112130979B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology

Abstract

The embodiment of the invention provides a method, a device, a terminal and a medium for scheduling tasks and training a neural network model. The task scheduling method comprises the following steps: judging whether the queue of subtasks to be built in a main task has changed, and if so, executing the following steps; obtaining, based on a time-consumption prediction neural network model, the remaining build time of each subtask being built and each subtask to be built in the main task, and taking the largest of these as the remaining build time of the main task; and judging whether the remaining build time of the main task meets the time-consumption requirement, and if not, adjusting the queuing order of the subtasks to be built in the main task. In this way, when slave-server resources are limited, the queuing order of the subtasks to be built in each main task can be adjusted dynamically, so that the limited slave-server resources are used reasonably and each main task is guaranteed to be delivered quickly and on time.

Description

Method, device, terminal and medium for scheduling task and training neural network model
Technical Field
The invention relates to the technical field of scheduling tasks, in particular to a method, a device, a terminal and a medium for scheduling tasks and training a neural network model.
Background
Jenkins is an open-source software project: a continuous integration tool developed in Java that provides an open, easy-to-use platform for monitoring continuous, repetitive work and thereby enables continuous integration of software.
Jenkins servers are mainly divided into master servers and slave servers: the master server is mainly used to distribute build tasks, and the slave servers are mainly used to execute them.
Generally, a project build comprises many sub-projects to be built, which may be triggered together and need to be built concurrently. Thus, when Jenkins is used for project building, tens or even hundreds of slave servers need to be deployed. If slave-server resources were sufficient, all sub-projects could be built simultaneously, and the build time of the whole project would be the build time of its slowest sub-project. However, slave servers cannot be deployed without limit; otherwise resources are wasted and utilization is low.
To ensure that project builds can be delivered quickly and on time, a scheduling strategy is usually needed to make reasonable use of these limited slave-server resources. However, conventional resource scheduling policies are generally static: once a build task has started, its priority, slave-server type and so on are essentially fixed, and so is the order in which the sub-projects are built. If a higher-priority task is inserted into the queue, a previously queued task is re-queued, or other conditions arise that lengthen the build time, the build task can no longer be completed quickly and on time.
Disclosure of Invention
The technical problem solved by the embodiment of the invention is how to reasonably utilize the slave server resource for constructing the project so as to ensure that the project construction can be delivered quickly and on time.
To solve the foregoing technical problems, embodiments of the present invention provide a method, an apparatus, a terminal, and a medium for scheduling tasks and training a neural network model.
The method for scheduling tasks provided by an embodiment of the invention comprises the following steps: S110, judging whether the queue of subtasks to be built in a main task has changed, and if so, executing the following steps, wherein a change means that a subtask is enqueued into and/or dequeued from the subtasks to be built; S120, obtaining, based on a time-consumption prediction neural network model, the remaining build time of each subtask being built and each subtask to be built in the main task, and taking the largest of these as the remaining build time of the main task; and S130, judging whether the remaining build time of the main task meets the time-consumption requirement, and if not, adjusting the queuing order of the subtasks to be built in the main task.
Optionally, obtaining the remaining build time of each subtask being built in the main task based on the time-consumption prediction neural network model comprises: predicting, through the time-consumption prediction neural network model, the predicted build time of each subtask being built in the main task; obtaining the elapsed build time of each subtask being built in the main task; and taking, for each subtask being built in the main task, the difference between its predicted build time and its elapsed build time as its remaining build time.
Optionally, obtaining the remaining build time of each subtask to be built in the main task based on the time-consumption prediction neural network model comprises: predicting, through the time-consumption prediction neural network model, the predicted build time of each subtask to be built in the main task; obtaining the task release time of each subtask being built on the slave servers suitable for building each subtask to be built in the main task; and taking, for each subtask to be built in the main task, the sum of its predicted build time and the corresponding task release time as its remaining build time.
Optionally, step S130 comprises: judging whether the remaining build time of the main task is less than or equal to the available build time of the main task; if not, judging whether the queuing level of each subtask to be built in the main task has reached the highest level; if not, raising the queuing level of the corresponding subtasks to be built in the main task and returning to step S120; wherein the available build time of the main task is the difference between the expected completion time of the main task and the current time.
Optionally, if the highest level has been reached, subtasks being built in other main tasks whose queuing level is lower than that of the main task are suspended.
Optionally, raising the queuing level of the corresponding subtasks to be built comprises: raising the queuing level of the subtask to be built with the largest remaining build time among those that have not reached the highest level; and/or raising the queuing level of the subtasks to be built whose remaining build time exceeds the available build time of the main task, among those that have not reached the highest level; and/or raising the queuing level of all subtasks to be built that have not reached the highest level.
An embodiment of the present invention further provides a device for scheduling tasks, comprising: a judging module, adapted to judge whether the queue of subtasks to be built in a main task has changed and whether the remaining build time of the main task meets the time-consumption requirement, wherein a change means that a subtask is enqueued into and/or dequeued from the subtasks to be built; a calculation module, adapted to obtain, when the queue changes, the remaining build time of each subtask being built and each subtask to be built in the main task based on the time-consumption prediction neural network model, and to take the largest of these as the remaining build time of the main task; and an adjusting module, adapted to acquire the time-consumption requirement of the main task and to adjust the queuing order of the subtasks to be built in the main task when the remaining build time of the main task does not meet the time-consumption requirement.
Optionally, the calculation module comprises: a prediction unit, adapted to predict, through the time-consumption prediction neural network model, the predicted build time of each subtask being built and of each subtask to be built in the main task; an acquisition unit, adapted to obtain the elapsed build time of each subtask being built in the main task and the task release time of each subtask being built on the slave servers suitable for building each subtask to be built in the main task; and a calculation unit, adapted to take, for each subtask being built in the main task, the difference between its predicted build time and its elapsed build time as its remaining build time, to take, for each subtask to be built in the main task, the sum of its predicted build time and the corresponding task release time as its remaining build time, and to take the largest remaining build time among the subtasks being built and the subtasks to be built as the remaining build time of the main task.
Optionally, the judging module comprises: a first judging unit, adapted to acquire the queue of subtasks to be built in the main task and to judge whether the queue has changed, wherein a change means that a subtask is enqueued into and/or dequeued from the subtasks to be built; a second judging unit, adapted to judge whether the remaining build time of the main task is less than or equal to the available build time of the main task, wherein the available build time of the main task is the difference between the expected completion time of the main task and the current time; and a third judging unit, adapted to judge, when the remaining build time of the main task is greater than the available build time of the main task, whether the queuing level of each subtask to be built in the main task has reached the highest level.
Optionally, the adjusting module is adapted to increase the queuing level of the corresponding sub task to be built when the queuing level of the sub task to be built in the main task does not reach the highest level.
Optionally, the adjusting module is adapted to suspend subtasks being built in other main tasks whose queuing level is lower than that of the main task when the queuing level of every subtask to be built in the main task has reached the highest level.
An embodiment of the present invention further provides a terminal for scheduling a task, including: the master server is used for distributing the subtasks to be constructed in the master task to the slave servers; the slave server is used for receiving the subtasks to be constructed distributed by the main server and executing construction on the subtasks; the main server comprises a first memory and a first processor, wherein the first memory stores first computer instructions capable of running on the first processor, and the first processor executes the steps of the method for scheduling tasks provided by the embodiment of the invention when running the first computer instructions.
The embodiment of the present invention further provides a first storage medium, on which a first computer instruction is stored, where the first computer instruction executes the steps of the method for scheduling tasks provided in the embodiment of the present invention when running.
The embodiment of the invention also provides a method for training a time-consuming prediction neural network model, which comprises the following steps: s410, acquiring configuration parameters of each subtask in each main task and performance parameters of a slave server for constructing the corresponding subtask, wherein the configuration parameters and the performance parameters both comprise feature vectors related to construction time consumption of the corresponding subtask; s420, constructing a training sample set based on the configuration parameters and the performance parameters; and S430, training the preset neural network by using the training sample set and taking the construction time consumption of the corresponding subtasks as a target, and obtaining a time consumption prediction neural network model.
Optionally, the feature vector includes file names of files that need to be built in each sub-task included in the configuration parameters.
Optionally, the feature vector comprises a sub-task name of each sub-task included in the configuration parameters.
Optionally, the feature vector includes a main task name of a main task included in the configuration parameters and corresponding to the corresponding subtask.
Optionally, the feature vector includes, from the performance parameters, the type of the slave server that builds the corresponding subtask and the number of parallel builds running on that slave server while it builds the subtask.
Optionally, the pre-set neural network comprises a long-short term memory artificial neural network based on an attention mechanism.
The embodiment of the invention also provides a device for training a time-consuming prediction neural network model, which comprises the following steps: the acquisition module is suitable for acquiring the configuration parameters of each subtask in each main task, the performance parameters of the slave server for constructing the corresponding subtask and a preset neural network, wherein the configuration parameters and the performance parameters comprise feature vectors related to construction time consumption of the corresponding subtask; a construction module adapted to construct a training sample set based on the configuration parameters and the performance parameters; and the training module is suitable for training the preset neural network by using the training sample set and taking the construction time consumption of the corresponding subtasks as a target, and obtaining a time-consumption prediction neural network model.
The embodiment of the invention also provides a terminal for training the time-consuming prediction neural network model, which comprises a second memory and a second processor, wherein the second memory stores a second computer instruction capable of being executed on the second processor, and the second processor executes the steps of the method for training the time-consuming prediction neural network model provided by the embodiment of the invention when executing the second computer instruction.
The embodiment of the present invention further provides a second storage medium, on which a second computer instruction is stored, and when the second computer instruction runs, the method for training a time-consuming prediction neural network model provided in the embodiment of the present invention is performed.
Compared with the prior art, the technical solutions of the embodiments of the invention have beneficial effects. For example, when slave-server resources are limited, the queuing order of the subtasks to be built in each main task can be adjusted dynamically, so that the limited slave-server resources are used reasonably and each main task is delivered quickly and on time.
Drawings
FIG. 1 is a flowchart illustrating a method for scheduling tasks according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an apparatus for scheduling tasks in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a terminal for scheduling tasks in an embodiment of the present invention;
FIG. 4 is a schematic flow chart illustrating a method for training a time-consuming predictive neural network model according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an apparatus for training a time-consuming predictive neural network model according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a terminal for training a time-consuming predictive neural network model according to an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the embodiments of the present invention more comprehensible, specific embodiments accompanied with figures are described in detail below.
Fig. 1 is a flowchart illustrating a method for scheduling tasks according to an embodiment of the present invention.
As shown in fig. 1, a method 100 for scheduling a task according to an embodiment of the present invention may include:
s110, judging whether the queue of the subtask to be constructed in the main task is changed, if so, executing the following steps;
s120, obtaining the construction residual time of each sub-task under construction and each sub-task to be constructed in the main task based on the time-consumption prediction neural network model, and taking the largest one as the construction residual time of the main task;
s130, judging whether the construction residual consumed time of the main task meets the consumed time requirement, if not, executing the step S140;
and S140, adjusting the queuing sequence of the subtasks to be constructed in the main task.
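Before each step is explained in detail, the overall flow of steps S110-S140 can be pictured with the following minimal Python sketch; the MainTask and SubTask types, the remaining_time_of callback and the adjust_queuing_order callback are illustrative assumptions introduced for this example, not part of the claimed method.

```python
# Illustrative sketch of the flow of steps S110-S140; all names are assumptions.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SubTask:
    name: str
    queuing_level: int            # 1 is the highest level, 10 the lowest
    building: bool = False        # True once a slave server has started building it

@dataclass
class MainTask:
    name: str
    expected_completion: float    # absolute time, e.g. seconds since the epoch
    subtasks: List[SubTask] = field(default_factory=list)

def on_queue_change(task: MainTask, now: float,
                    remaining_time_of: Callable[[SubTask], float],
                    adjust_queuing_order: Callable[[MainTask], None]) -> None:
    """Run whenever the queue of subtasks to be built changes (S110)."""
    # S120: the main task's remaining build time is the largest remaining
    # build time among its building and to-be-built subtasks.
    task_remaining = max(remaining_time_of(sub) for sub in task.subtasks)
    # S130: the available build time is the expected completion time minus now.
    available = task.expected_completion - now
    if task_remaining > available:
        adjust_queuing_order(task)   # S140: reorder the to-be-built queue
```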
In the embodiment of the present invention, when a project is constructed, each construction task of the project may be referred to as a main task, each main task includes a plurality of projects to be constructed, and each project to be constructed may be referred to as a subtask.
In a specific implementation, one master server and a plurality of slave servers can be configured: the master server distributes the subtasks to the slave servers, and each slave server receives the subtasks distributed by the master server and builds them. Slave servers may be physical servers or virtual servers. A physical server builds subtasks better than a virtual server does; for the same subtask, building on a physical server takes less time than building on a virtual server, so when both idle physical servers and idle virtual servers are available, the physical server may be used preferentially to build the subtask.
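As a rough illustration of this server-selection preference, the following Python sketch prefers an idle physical server and otherwise falls back to a virtual one; the Slave type and its fields are assumptions introduced only for this example.

```python
# Illustrative slave-server selection; the Slave type and its fields are assumptions.
from typing import List, Optional

class Slave:
    def __init__(self, name: str, physical: bool, running: int, capacity: int):
        self.name = name
        self.physical = physical   # physical servers build faster than virtual ones
        self.running = running     # number of subtasks it is currently building
        self.capacity = capacity   # maximum number of parallel builds

    def idle_slots(self) -> int:
        return self.capacity - self.running

def pick_slave(slaves: List[Slave]) -> Optional[Slave]:
    """Prefer an idle physical server; otherwise fall back to a virtual one."""
    candidates = [s for s in slaves if s.idle_slots() > 0]
    if not candidates:
        return None   # no free capacity; the subtask keeps waiting in the queue
    candidates.sort(key=lambda s: (not s.physical, s.running))
    return candidates[0]
```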
In the embodiment of the invention, the construction of a plurality of main tasks can be triggered simultaneously, and each subtask in the main tasks can be constructed concurrently. However, since the slave server resources cannot be configured infinitely, each slave server can run multiple threads concurrently to build multiple sub-tasks simultaneously when performing project building.
Before constructing a plurality of main tasks triggered simultaneously, each main task and its subtasks may be queued according to a certain level, and each subtask is constructed according to its queuing level.
Specifically, the queuing levels of the main task and the subtasks may be set in advance. For example, the queuing levels of the main task may be set to a VIP level and a non-VIP level, and the queuing levels of the subtasks are set to one, two, three, four, five, six, seven, eight, nine, and ten levels from high to low, with the subtasks at higher levels starting to be constructed earlier.
Before each main task is constructed, the queuing level of each main task and each subtask thereof can be determined according to the specific construction requirement of each main task and each subtask thereof. For example, if a certain primary task is important relative to other primary tasks, the primary task may be determined to be of VIP level before being constructed, and the primary task determined to be of VIP level may be constructed preferentially using the physical server. For another example, if a main task includes multiple sub-tasks, and the expected completion time of each sub-task is different, each sub-task may determine the queue level according to the expected completion time.
In specific implementation, each subtask in each main task can be constructed according to its own queuing level, that is, when a plurality of main tasks are constructed simultaneously, the subtasks of each main task can be constructed concurrently, and it is not necessary to wait for the construction of the subtasks in other main tasks to be completed after all subtasks in the main task with higher level are constructed.
In the specific implementation of step S110, the construction of the main task, which includes one or more sub-tasks being constructed and one or more sub-tasks to be constructed, has been triggered and the resources of the slave servers suitable for constructing each sub-task in the main task are limited. The sub-task being constructed represents a sub-task which has already been constructed by starting from the server, and the sub-task to be constructed represents a sub-task which has not been constructed by starting from the server and is waiting to be constructed in queue.
In the embodiment of the present invention, a plurality may represent two or more.
In a specific implementation of step S110, the change of the queue of the to-be-constructed sub task in the main task may include that there is a sub task enqueue and/or a sub task dequeue in the to-be-constructed sub task in the main task. The sub-task enqueueing indicates that a new sub-task is added in the main task and is waiting to be constructed in a queue, and the sub-task dequeuing indicates that a sub-task in the sub-tasks to be constructed in the main task is allocated to a corresponding slave server to start construction or a construction task in the sub-tasks to be constructed in the main task is cancelled.
Specifically, the new subtask may be a newly added subtask different from the original subtasks in the main task, or it may be an original subtask of the main task whose build was suspended and which therefore needs to be queued for building again. A build may be suspended because the slave server fails, or because other main tasks with a higher queuing level are built first.
In a specific implementation of step S120, the building remaining time of each of the sub-tasks being built and each of the sub-tasks to be built in the main task may be obtained based on the time-consumption prediction neural network model, and the largest one of the building remaining time of the main task is taken as the building remaining time of the main task.
In the embodiment of the present invention, the subtasks in the main task may be built concurrently. Therefore, the remaining build time of the main task is determined by the subtask with the largest remaining build time; that is, the remaining build time of the main task is the largest remaining build time among its subtasks being built and its subtasks to be built.
In a specific implementation, obtaining the build remaining time consumption of each sub-task being built in the main task based on the time consumption prediction neural network model may include:
s121, respectively predicting the construction prediction time consumption of each constructed subtask in the main task through a time consumption prediction neural network model;
s122, acquiring the constructed time consumption of each constructed subtask in the main task;
and S123, respectively taking the difference value between the construction predicted time consumption and the constructed time consumption of each sub task being constructed in the main task as the construction residual time consumption of each sub task being constructed in the main task.
In the specific implementation of step S121, there may be one or more sub-tasks being constructed in the main task, and each sub-task being constructed may predict, through the time-consuming prediction neural network model, the total time consumed for construction, that is, the time-consuming prediction of construction. An implementation manner of predicting the construction prediction time consumption of the subtask being constructed through the time-consumption prediction neural network model will be described in the subsequent section of this document.
In a specific implementation of step S122, the built elapsed time of each sub task being built in the main task may be obtained by a difference between the current time and the start building time of the corresponding sub task being built, respectively.
In a specific implementation of step S123, a difference between the predicted building time (i.e., the total time that each building subtask needs to be built) of each building subtask in the main task and the respective built time may be taken as the remaining building time of each building subtask in the main task.
In an embodiment of the present invention, each of the sub-tasks under construction has a corresponding construction predicted time consumption, constructed time consumption and construction remaining time consumption.
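The computation described in steps S121-S123 amounts to the following small Python sketch, in which predict_build_time stands in for the time-consumption prediction neural network model and the field names are assumptions.

```python
# Illustrative computation of steps S121-S123; predict_build_time stands in for
# the time-consumption prediction neural network model and is an assumption.
def remaining_time_building(sub, now: float, predict_build_time) -> float:
    """Remaining build time of a subtask that is already being built."""
    predicted_total = predict_build_time(sub)     # S121: predicted total build time
    already_built = now - sub.start_build_time    # S122: elapsed build time so far
    return predicted_total - already_built        # S123: predicted minus elapsed
```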
In a specific implementation, obtaining the construction remaining time consumption of each subtask to be constructed in the main task based on the time consumption prediction neural network model may include:
s124, respectively predicting the construction prediction time consumption of each subtask to be constructed in the main task through a time consumption prediction neural network model;
s125, acquiring time consumed by task release of each sub-task under construction in the slave server suitable for constructing each sub-task to be constructed in the main task;
and S126, respectively taking the sum of the construction predicted time consumption and the corresponding task release time consumption of each subtask to be constructed in the main task as the construction residual time consumption of each subtask to be constructed in the main task.
In the specific implementation of step S124, there may be one or more subtasks to be constructed in the main task, and each subtask to be constructed can predict the total time consumption required for construction through the time-consumption prediction neural network model, that is, the time consumption for construction prediction. The implementation manner of predicting the construction prediction time consumption of the subtask to be constructed through the time consumption prediction neural network model will be described in the subsequent section of this document.
In a specific implementation of step S125, each of the sub-tasks being built in the slave server may include the sub-task in the master task and may also include the sub-tasks in other master tasks, and the time consumed for releasing the task from each of the sub-tasks being built in the slave server is the time consumed for the remaining building of the corresponding sub-task being built.
In the specific implementation of step S126, the remaining construction time of each to-be-constructed subtask in the main task is the time elapsed from the current time to the time when the corresponding to-be-constructed subtask completes construction.
In the embodiment of the present invention, a subtask to be built in the main task must wait for a slave server suitable for building it to release one of the subtasks that slave server is currently building before the subtask can start to build (because slave-server resources are limited, a slave server must finish one of its current builds before it can start the next). Therefore, the remaining build time of each subtask to be built in the main task is taken as the sum of its predicted build time (i.e., the total time the subtask needs to build) and the corresponding task release time.
For example, the queue of the sub-task to be built of the master task includes the first sub-task to be built located first in the queue, and the sub-task being built of the slave server adapted to build the first sub-task to be built includes the first sub-task being built which has the shortest remaining time consumption, so that the slave server is adapted to start building the first sub-task to be built after the first sub-task being built completes the building. Thus, the remaining build time of the first to-be-built sub-task is the sum of the remaining build time of the first building sub-task and the build predicted time consumption of the first to-be-built sub-task (i.e., the total time consumption that the first to-be-built sub-task needs to take to build).
In the embodiment of the invention, each subtask to be constructed has corresponding construction prediction time consumption, task release time consumption and construction residual time consumption.
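Steps S124-S126, together with the example above, can be sketched as follows in Python; the building_on_slave argument and the field names are assumptions, and taking the shortest remaining time of the occupying builds as the task release time is one reasonable reading of the example for the subtask at the head of its slave server's queue.

```python
# Illustrative computation of steps S124-S126; all names are assumptions.
def remaining_time_queued(sub, building_on_slave, now: float, predict_build_time) -> float:
    """Remaining build time of a queued subtask.

    building_on_slave: the subtasks currently being built on the slave server
    that is suitable for (and will take) this queued subtask.
    """
    predicted_total = predict_build_time(sub)                        # S124
    # S125: the task release time is the shortest remaining build time among
    # the builds currently occupying that slave server (cf. the example above).
    release = min(predict_build_time(b) - (now - b.start_build_time)
                  for b in building_on_slave)
    return release + predicted_total                                 # S126
```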
In a specific implementation of step S130, determining whether the remaining construction time of the main task meets the time consumption requirement may include:
s131, judging whether the construction residual time of the main task is less than or equal to the construction available time of the main task.
In a specific implementation of step S131, the construction available time for the main task may be a difference between the expected completion time of the main task and the current time.
In a specific implementation of step S140, adjusting the queuing order of the sub-tasks to be constructed in the main task may include:
s141, judging whether the queuing level of each subtask to be constructed in the main task reaches the highest level, if not, increasing the queuing level of the corresponding subtask to be constructed in the main task, and returning to the step S120.
In the specific implementation of step S141, if the queuing level of the to-be-constructed sub task in the main task does not reach the highest level, the queuing level of the corresponding to-be-constructed sub task is increased. When the queuing level of the to-be-constructed sub task in the main task is increased, the queue of the to-be-constructed sub task in the main task is changed again, and at this time, the step S120 may be executed again.
In a specific implementation, the increasing the queuing level of the corresponding sub-task to be constructed may include: increasing the queuing level of the subtask to be constructed with the largest residual time consumption in the subtasks to be constructed which do not reach the highest level; and/or increasing the queuing level of the subtasks to be constructed, which have residual construction time consumption larger than the construction available time consumption of the main task, in the subtasks to be constructed, and which do not reach the highest level; and/or increasing the queuing level of all sub-tasks to be built that do not reach the highest level. For example, the queuing level of the corresponding sub-task to be built may be increased by one or more levels.
In the specific implementation of step S141, when the queuing level of each to-be-constructed subtask in the main task reaches the highest level, the method may further include:
s142, the sub task being constructed in the main task of which the level is lower than that of the main task is terminated.
In the specific implementation of step S142, although the queuing level of each sub-task to be built in the main task has reached the highest level, the remaining time consumed for building the main task does not meet the time consumption requirement, at this time, the sub-tasks being built in other main tasks lower than the level of the main task may be suspended to release the slave server resources, so that one or more of the sub-tasks to be built in the main task may start to perform building. For example, when the main task is of a VIP level, the sub-tasks being constructed in the main task of a non-VIP level may be suspended.
In a specific implementation, the sub-task under construction in the suspended other main tasks can rejoin the queue of the sub-task to be constructed of the corresponding main task and wait for queuing construction.
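A minimal Python sketch of the adjustment logic in steps S130-S142 follows; the level numbering (1 treated as the highest level), the promotion strategy chosen and the suspend_lower_level_builds callback are assumptions, since the description above allows several strategies.

```python
# Illustrative sketch of steps S130-S142; names and HIGHEST_LEVEL are assumptions.
HIGHEST_LEVEL = 1

def adjust_queuing_order(task, now, remaining_time_of, suspend_lower_level_builds):
    task_remaining = max(remaining_time_of(s) for s in task.subtasks)
    available = task.expected_completion - now             # S131
    if task_remaining <= available:
        return                                             # time-consumption requirement met
    queued = [s for s in task.subtasks if not s.building]
    promotable = [s for s in queued if s.queuing_level > HIGHEST_LEVEL]
    if promotable:
        # S141: one of the strategies named above -- raise the level of the
        # queued subtask with the largest remaining build time by one level.
        max(promotable, key=remaining_time_of).queuing_level -= 1
    else:
        # S142: every queued subtask is already at the highest level, so free
        # slave-server resources by suspending builds of lower-level main tasks.
        suspend_lower_level_builds(task)
```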
The embodiment of the invention also provides a device for scheduling the tasks.
Fig. 2 is a schematic diagram of an apparatus for scheduling tasks in an embodiment of the present invention.
As shown in fig. 2, an apparatus 200 for scheduling tasks according to an embodiment of the present invention may include a determining module 210, a calculating module 220 connected to the determining module 210, and an adjusting module 230 connected to the calculating module 220.
Specifically, the judging module 210 is adapted to judge whether the queue of subtasks to be built in the main task has changed and whether the remaining build time of the main task meets the time-consumption requirement, where a change means that a subtask is enqueued into and/or dequeued from the subtasks to be built.
In a specific implementation, the judging module 210 may include a first judging unit, a second judging unit, and a third judging unit connected to the second judging unit. Specifically, the first judging unit is adapted to acquire the queue of subtasks to be built in the main task from the master server and to judge whether the queue has changed. The second judging unit is adapted to judge whether the remaining build time of the main task is less than or equal to the available build time of the main task, where the available build time of the main task is the difference between the expected completion time of the main task and the current time. The third judging unit is adapted to judge, when the remaining build time of the main task is greater than the available build time of the main task, whether the queuing level of each subtask to be built in the main task has reached the highest level.
Specifically, the calculating module 220 is adapted to obtain the construction remaining time of each currently-constructed subtask and each subtask to be constructed in the main task based on the time-consumption prediction neural network model when the queue of the subtask to be constructed in the main task changes, and take the largest one of the currently-constructed subtasks and the construction remaining time of each subtask to be constructed in the main task as the construction remaining time of the main task.
In particular implementations, the calculation module 220 may include a prediction unit, an acquisition unit, and a calculation unit connected to the prediction unit and the acquisition unit, respectively. Specifically, the prediction unit is adapted to predict, through the time-consumption prediction neural network model, the predicted build time of each subtask being built and of each subtask to be built in the main task. The acquisition unit is adapted to obtain the elapsed build time of each subtask being built in the main task and the task release time of each subtask being built on the slave servers suitable for building each subtask to be built in the main task. The calculation unit is adapted to take, for each subtask being built in the main task, the difference between its predicted build time and its elapsed build time as its remaining build time, to take, for each subtask to be built in the main task, the sum of its predicted build time and the corresponding task release time as its remaining build time, and to take the largest remaining build time among the subtasks being built and the subtasks to be built as the remaining build time of the main task.
In particular, the adjusting module 230 is adapted to adjust the queuing order of the subtasks to be built in the main task when the remaining building time of the main task does not meet the time consumption requirement.
In a specific implementation, the adjusting module 230 is adapted to raise the queuing level of the corresponding subtasks to be built when the queuing level of a subtask to be built in the main task has not reached the highest level. Specifically, the adjusting module 230 is adapted to raise the queuing level of the subtask to be built with the largest remaining build time among those that have not reached the highest level; and/or to raise the queuing level of the subtasks to be built whose remaining build time exceeds the available build time of the main task, among those that have not reached the highest level; and/or to raise the queuing level of all subtasks to be built that have not reached the highest level. For example, the adjusting module 230 may raise the queuing level of the corresponding subtask to be built by one or more levels.
In a specific implementation, the adjusting module 230 is further adapted to suspend the sub-task being built in other main tasks lower than the queuing level of the main task when the queuing level of each sub-task to be built in the main task reaches the highest level.
In a specific implementation, the task scheduling apparatus 200 may further include a clock respectively connected to the determining module 210 and the calculating module 220, and adapted to collect the current time in the main task building process.
The embodiment of the invention also provides a terminal for scheduling tasks.
Fig. 3 is a schematic diagram of a terminal for scheduling tasks in the embodiment of the present invention.
As shown in fig. 3, a terminal 300 for scheduling tasks according to an embodiment of the present invention may include a master server 310 and a slave server 320 connected to the master server 310.
Specifically, the master server 310 is used to distribute the subtasks to be built in the main task to the slave server 320, and the slave server 320 is used to receive the subtasks to be built distributed by the master server 310 and to build them.
In a particular implementation, the primary server 310 may include a first memory and a first processor. The first memory has stored thereon first computer instructions executable on the first processor, and the first processor executes the first computer instructions to perform the steps of the method 100 for scheduling tasks according to the embodiment of the present invention.
In a specific implementation, a plurality of slave servers 320 are provided so that a plurality of subtasks in the main task can be built concurrently. However, slave servers 320 cannot be provided without limit, and their number is generally smaller than the total number of subtasks to be built; this saves resources and reduces operating costs.
In a specific implementation, the terminal 300 for scheduling tasks may further include a clock adapted to collect the current time in the main task building process.
The embodiment of the present invention further provides a first storage medium, on which a first computer instruction is stored, where the first computer instruction executes the steps of the method for scheduling tasks provided in the embodiment of the present invention when running.
The embodiment of the invention also provides a method for training the time-consuming prediction neural network model.
FIG. 4 is a flowchart illustrating a method for training a time-consuming predictive neural network model according to an embodiment of the present invention.
As shown in fig. 4, a method 400 for training a time-consuming predictive neural network model according to an embodiment of the present invention may include:
s410, acquiring configuration parameters of each subtask in each main task and performance parameters of a slave server for constructing the corresponding subtask, wherein the configuration parameters and the performance parameters comprise feature vectors related to construction time consumption of the corresponding subtask;
s420, constructing a training sample set based on the configuration parameters and the performance parameters;
and S430, training the preset neural network by using the training sample set and taking the construction time consumption of the corresponding subtasks as a target, and obtaining a time consumption prediction neural network model.
In the embodiment of the present invention, when a project is constructed, each construction task of the project may be referred to as a main task, each main task includes a plurality of projects to be constructed, and each project to be constructed may be referred to as a subtask.
In a particular implementation, each subtask may be divided into multiple files that need to be built. For example, under the android platform, one subtask may include files such as idh, rls_image_idh, ota and fullcode_pac; if the build requirement of a file is "TRUE", the file needs to be built. Building each file takes a certain amount of time. In particular, the build time of each file is related at least to the file name: the file name determines its type, content, data and other information. Files with different file names have different build times, and files with the same file name in different main tasks may also have different build times, which are related to the code compilation time of the file or the data packaged into it.
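As a concrete illustration of such a configuration, a subtask's build flags might be represented as a simple map; the dictionary layout and the names below are assumptions, not the patent's data format.

```python
# Illustrative build configuration for one subtask under the android example above;
# the dictionary layout and the task names are assumptions.
subtask_config = {
    "main_task_name": "nightly_release",     # hypothetical main task name
    "subtask_name": "user_build",            # hypothetical subtask name
    "files": {
        "idh": "TRUE",                       # "TRUE" means the file needs to be built
        "rls_image_idh": "TRUE",
        "ota": "FALSE",
        "fullcode_pac": "TRUE",
    },
}
files_to_build = [name for name, flag in subtask_config["files"].items() if flag == "TRUE"]
print(files_to_build)   # ['idh', 'rls_image_idh', 'fullcode_pac']
```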
In the specific implementation of step S410, first, a database containing historical construction data of the main task may be constructed, and then, the configuration parameters of each sub task in each main task and the performance parameters of the slave servers used to construct the corresponding sub tasks are obtained from the database.
The historical construction data of the main task may include historical configuration parameters of each sub task in the main task and historical performance parameters of the slave servers used for constructing the corresponding sub task, and the historical configuration parameters and the historical performance parameters each include a feature vector related to construction time consumption of the corresponding sub task.
In a specific implementation, the configuration parameter of each sub-task in each main task may include a file name of each file that needs to be constructed in the sub-task.
In a specific implementation, the configuration parameter of each subtask in the respective main task may further include a subtask name of the subtask. The sub-task name of each sub-task is related to at least the file name of the respective file it comprises that needs to be built.
In a specific implementation, the configuration parameter of each sub task in each main task may further include a main task name of the main task corresponding to the sub task. The main task name of each main task is related to at least the sub-task names of the respective sub-tasks it comprises.
In the embodiment of the invention, the building of every file in every subtask of the main task affects the total build time and the remaining build time of the main task.
In the specific implementation of step S410, the performance parameters of the slave server used to construct the corresponding subtask may include the type of the slave server and the number of parallel processes when the subtask is being constructed.
In particular, the types of slave servers may include physical servers and virtual servers. The construction performance of the physical server to the subtasks is superior to that of the virtual server. For example, for the same subtask, the time consumed for building using a physical server is less than the time consumed for building using a virtual server. When the idle physical server and the virtual server exist at the same time, the physical server can be preferentially used for constructing the subtasks.
When a slave server is used to construct a certain subtask, the number of all the subtasks being constructed in the slave server, i.e. the number of parallel processes in the slave server, also has an influence on the construction time consumption of the subtask. For example, when a slave server is used to construct a sub-task, the more parallel processes in the slave server, the longer it takes to construct the sub-task.
In the embodiment of the invention, the configuration parameters of each subtask in each main task and the performance parameters of the slave server for constructing the corresponding subtask each comprise a feature vector related to the construction time consumption of the corresponding subtask.
Specifically, the feature vector may include the file name of the file that needs to be built in each subtask included in the configuration parameters, and the type of the slave server included in the performance parameters that builds the corresponding subtask and the number of parallel processes when it is building the subtask.
Further, the feature vector may also include the sub-task name of each sub-task included in the configuration parameters.
Further, the feature vector may further include a main task name of the main task included in the configuration parameter and corresponding to the corresponding subtask.
In a specific implementation of step S420, a training sample set may be constructed based on configuration parameters and performance parameters including feature vectors associated with construction time-consuming of corresponding subtasks, so as to be used for training of the neural network.
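A sketch of how one training sample might be assembled from these parameters (step S420) is given below; the dictionary keys and the feature encoding are assumptions.

```python
# Illustrative assembly of one training sample (step S420); the dictionary keys
# and the encoding of the feature vector are assumptions.
def build_sample(config: dict, perf: dict, build_seconds: float):
    """config: configuration parameters of the subtask; perf: performance
    parameters of the slave server that built it; build_seconds: the measured
    total build time, used as the training target."""
    features = {
        "main_task_name": config["main_task_name"],
        "subtask_name": config["subtask_name"],
        "file_names": [f for f, flag in config["files"].items() if flag == "TRUE"],
        "slave_type": perf["slave_type"],            # "physical" or "virtual"
        "parallel_builds": perf["parallel_builds"],  # builds running on that slave
    }
    return features, build_seconds

# A training sample set is then the list of (features, build_seconds) pairs
# collected from the historical build data in the database.
```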
In the specific implementation of step S430, the preset neural network may be trained by using the training sample set to target the construction time consumption of the corresponding subtask (i.e., the time consumption required for constructing the entire subtask), and obtain a time consumption prediction neural network model.
In particular implementations, the pre-set neural network may include a Long Short-Term Memory artificial neural network (LSTM) based on an attention mechanism. When the long-short term memory artificial neural network is trained, the training is carried out by adopting the conventional technical means in the field.
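One possible realization of such an attention-based LSTM, together with a bare-bones training loop for step S430, is sketched below in PyTorch; the layer sizes, the simple learned attention pooling and the synthetic tensors are assumptions and stand in for the real training sample set.

```python
# Illustrative attention-based LSTM regressor (PyTorch); sizes are assumptions.
import torch
import torch.nn as nn

class BuildTimeLSTM(nn.Module):
    def __init__(self, input_size: int, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.attn = nn.Linear(hidden_size, 1)   # scores each time step
        self.head = nn.Linear(hidden_size, 1)   # regresses build time (seconds)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_size) -- e.g. an embedded file-name sequence
        out, _ = self.lstm(x)                          # (batch, seq_len, hidden)
        weights = torch.softmax(self.attn(out), dim=1)
        context = (weights * out).sum(dim=1)           # attention-pooled summary
        return self.head(context).squeeze(-1)          # predicted build time

# Training loop (S430): regress against the measured build time.
model = BuildTimeLSTM(input_size=32)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(8, 5, 32)        # 8 synthetic samples, 5 files each, 32-dim features
y = torch.rand(8) * 3600.0       # synthetic measured build times, in seconds
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```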
In the embodiment of the invention, the predicted build time of a subtask being built or a subtask to be built can be obtained from the time-consumption prediction neural network model through the following steps:
acquiring configuration parameters of the subtask being constructed or to be constructed and performance parameters of the slave server for constructing the corresponding subtask;
extracting a characteristic vector related to the construction time consumption of the subtask being constructed or to be constructed from the configuration parameter and the performance parameter;
and inputting the related feature vectors into a time-consuming prediction neural network model, and obtaining the time-consuming prediction of the construction of the subtask being constructed or the subtask to be constructed.
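These three steps can be sketched as a small prediction helper; encode_features is a hypothetical function, and the BuildTimeLSTM model refers to the sketch given earlier in this description.

```python
# Illustrative use of the trained model to predict the build time of a subtask
# being built or to be built; encode_features is a hypothetical helper that turns
# the extracted feature vector into the (1, seq_len, input_size) tensor expected
# by the BuildTimeLSTM sketched above.
import torch

def predict_build_seconds(model, features: dict, encode_features) -> float:
    x = encode_features(features)    # shape (1, seq_len, input_size)
    model.eval()
    with torch.no_grad():
        return float(model(x))       # predicted total build time, in seconds
```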
The embodiment of the invention also provides a device for training the time-consuming prediction neural network model.
FIG. 5 is a schematic diagram of an apparatus for training a time-consuming predictive neural network model according to an embodiment of the present invention.
As shown in fig. 5, an apparatus 500 for training a time-consuming predictive neural network model according to an embodiment of the present invention may include an obtaining module 510, a constructing module 520 connected to the obtaining module 510, and a training module 530 connected to the constructing module 520.
Specifically, the obtaining module 510 is adapted to obtain a configuration parameter of each sub-task in each main task, a performance parameter of the slave server for constructing the corresponding sub-task, and a preset neural network, wherein the configuration parameter and the performance parameter each include a feature vector related to construction time consumption of the corresponding sub-task. The construction module 520 is adapted to construct a training sample set based on the configuration parameters and the performance parameters. The training module 530 is adapted to obtain a preset neural network, and train the preset neural network by using the training sample set and targeting the construction time consumption of the corresponding subtask to obtain a time consumption prediction neural network model.
The embodiment of the invention also provides a terminal for training the time-consuming prediction neural network model.
FIG. 6 is a schematic diagram of a terminal for training a time-consuming predictive neural network model according to an embodiment of the present invention.
As shown in fig. 6, a terminal 600 for training a time-consuming predictive neural network model according to an embodiment of the present invention may include a second memory 610 and a second processor 620 connected to the second memory 610. The second memory 610 stores second computer instructions executable on the second processor 620, and the second processor 620 executes the steps of the method for training the time-consuming predictive neural network model according to the embodiment of the present invention when executing the second computer instructions.
The embodiment of the present invention further provides a second storage medium, on which a second computer instruction is stored, and when the second computer instruction runs, the method for training a time-consuming prediction neural network model provided in the embodiment of the present invention is performed.
By adopting the technical scheme provided by the embodiment of the invention, the queuing sequence of the subtasks to be constructed in each main task can be dynamically adjusted under the condition that the resources of the slave server are limited, so that the limited resources of the slave server are reasonably utilized, and the main tasks are ensured to be delivered quickly and on time.
In specific implementation, the technical scheme provided by the embodiment of the invention can be applied to projects constructed by Jenkins and other construction projects with limited construction resources.
While specific embodiments have been described above, these embodiments are not intended to limit the scope of the present disclosure, even where only a single embodiment is described with respect to a particular feature. Examples of features provided in the present disclosure are intended to be illustrative rather than limiting, unless stated otherwise. In particular implementations, the features of one or more dependent claims may be combined with those of the independent claims where technically feasible and as actually required, and features from the respective independent claims may be combined in any appropriate manner, not merely in the specific combinations enumerated in the claims.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (13)

1. A method (100) of scheduling tasks, comprising:
s110, judging whether a queue of a subtask to be constructed in a main task is changed, if so, executing the following steps, wherein the change comprises that the subtask to be constructed is listed and/or listed;
s120, obtaining the construction residual time of each sub task under construction and each sub task to be constructed in the main task based on a time-consuming prediction neural network model, and taking the largest one as the construction residual time of the main task;
s130, judging whether the construction residual time consumption of the main task meets the time consumption requirement or not, and if not, adjusting the queuing sequence of the subtasks to be constructed in the main task.
2. The method (100) of claim 1, wherein obtaining the construction residual time consumption of each subtask being constructed in the main task based on the time-consumption prediction neural network model comprises:
predicting, through the time-consumption prediction neural network model, the construction predicted time consumption of each subtask being constructed in the main task;
acquiring the constructed time consumption of each subtask being constructed in the main task, that is, the time consumption already spent on its construction;
and taking, for each subtask being constructed in the main task, the difference between its construction predicted time consumption and its constructed time consumption as its construction residual time consumption.
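A minimal sketch of the computation recited in claim 2, assuming the predicted and already-spent durations are available as plain dictionaries keyed by subtask; all names are illustrative only.

def remaining_for_building(building_subtasks, predicted, constructed):
    # residual = predicted total construction time - time already spent constructing
    return {s: predicted[s] - constructed[s] for s in building_subtasks}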
3. The method (100) according to claim 1, wherein obtaining the construction residual time consumption of each subtask to be constructed in the main task based on the time-consumption prediction neural network model comprises:
predicting, through the time-consumption prediction neural network model, the construction predicted time consumption of each subtask to be constructed in the main task;
acquiring, for each subtask to be constructed in the main task, the task release time consumption of the subtasks being constructed on the slave server suitable for constructing that subtask;
and taking, for each subtask to be constructed in the main task, the sum of its construction predicted time consumption and the corresponding task release time consumption as its construction residual time consumption.
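A minimal sketch of the computation recited in claim 3, assuming release_time maps each queued subtask to the time until a suitable slave server becomes free; the names are illustrative only.

def remaining_for_queued(queued_subtasks, predicted, release_time):
    # residual = predicted construction time + wait until a suitable slave is released
    return {s: predicted[s] + release_time[s] for s in queued_subtasks}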
4. The method (100) according to claim 1, wherein said step S130 comprises:
judging whether the construction residual time consumption of the main task is less than or equal to the construction available time consumption of the main task; if not, judging whether the queuing level of each subtask to be constructed in the main task has reached the highest level, and if not, raising the queuing level of the corresponding subtask to be constructed and returning to step S120;
wherein the construction available time consumption of the main task is the difference between the expected completion time of the main task and the current time.
5. The method (100) of claim 4, wherein if the highest level has been reached, a subtask being constructed that belongs to a main task whose queuing level is lower than that of said main task is aborted.
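A minimal sketch of the fallback recited in claim 5, assuming each main task exposes its queuing level and the subtasks it currently has under construction; the attribute names are illustrative only.

def abort_lower_priority(all_main_tasks, current):
    # Once the current main task's queued subtasks are all at the highest level,
    # free slave capacity by aborting builds of lower-level main tasks.
    for other in all_main_tasks:
        if other is not current and other.level < current.level:
            for sub in other.building:
                sub.abort()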
6. The method (100) according to claim 4, wherein said raising the queuing level of the corresponding subtask to be constructed comprises:
raising the queuing level of the subtask to be constructed that has the largest construction residual time consumption among the subtasks to be constructed that have not reached the highest level; and/or
raising the queuing level of those subtasks to be constructed, among the subtasks to be constructed that have not reached the highest level, whose construction residual time consumption is greater than the construction available time consumption of the main task; and/or
raising the queuing level of all the subtasks to be constructed that have not reached the highest level.
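A minimal sketch of the three level-raising strategies listed in claim 6; any one of them, or a combination, may be applied, and the attribute names and max_level parameter are assumptions of this sketch.

def raise_levels(queued, available, max_level, strategy="largest"):
    candidates = [s for s in queued if s.level < max_level]
    if not candidates:
        return
    if strategy == "largest":
        # raise only the candidate with the largest construction residual time consumption
        max(candidates, key=lambda s: s.remaining).level += 1
    elif strategy == "over_budget":
        # raise every candidate whose residual time exceeds the available time
        for s in candidates:
            if s.remaining > available:
                s.level += 1
    else:
        # raise all candidates that have not yet reached the highest level
        for s in candidates:
            s.level += 1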
7. An apparatus (200) for scheduling tasks, comprising:
a judging module (210), adapted to judge whether a queue of subtasks to be constructed in a main task has changed and whether the construction residual time consumption of the main task meets a time consumption requirement, wherein the change comprises a subtask being enqueued into and/or dequeued from the queue of subtasks to be constructed;
a calculation module (220), adapted to obtain, when the queue changes, the construction residual time consumption of each subtask being constructed and of each subtask to be constructed in the main task based on a time-consumption prediction neural network model, and to take the largest one as the construction residual time consumption of the main task;
and an adjusting module (230), adapted to acquire the time consumption requirement of the main task and, when the construction residual time consumption of the main task does not meet the time consumption requirement, to adjust the queuing sequence of the subtasks to be constructed in the main task.
8. The apparatus (200) of claim 7, wherein said calculation module (220) comprises:
a prediction unit, adapted to predict, through the time-consumption prediction neural network model, the construction predicted time consumption of each subtask being constructed and of each subtask to be constructed in the main task;
an acquiring unit, adapted to acquire the constructed time consumption of each subtask being constructed in the main task and, for each subtask to be constructed in the main task, the task release time consumption of the subtasks being constructed on the slave server suitable for constructing that subtask;
and a computing unit, adapted to take, for each subtask being constructed in the main task, the difference between its construction predicted time consumption and its constructed time consumption as its construction residual time consumption; to take, for each subtask to be constructed in the main task, the sum of its construction predicted time consumption and the corresponding task release time consumption as its construction residual time consumption; and to take the largest of these construction residual time consumptions as the construction residual time consumption of the main task.
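A minimal sketch of the computing unit recited in claim 8, combining the two per-subtask formulas and taking the maximum as the main task's residual time consumption; the dictionary names are illustrative only.

def main_task_remaining(building, queued, predicted, constructed, release_time):
    per_subtask = [predicted[s] - constructed[s] for s in building]
    per_subtask += [predicted[s] + release_time[s] for s in queued]
    return max(per_subtask, default=0.0)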
9. The apparatus (200) of claim 7, wherein said judging module (210) comprises:
a first judging unit, adapted to acquire the queue of subtasks to be constructed in the main task and to judge whether the queue has changed, wherein the change comprises a subtask being enqueued into and/or dequeued from the queue of subtasks to be constructed;
a second judging unit, adapted to judge whether the construction residual time consumption of the main task is less than or equal to the construction available time consumption of the main task, wherein the construction available time consumption of the main task is the difference between the expected completion time of the main task and the current time;
and a third judging unit, adapted to judge whether the queuing level of each subtask to be constructed in the main task has reached the highest level when the construction residual time consumption of the main task is greater than the construction available time consumption of the main task.
10. The apparatus (200) according to claim 9, wherein the adjusting module (230) is adapted to raise the queuing level of the corresponding subtask to be constructed when the queuing level of the subtasks to be constructed in the main task has not reached the highest level.
11. The apparatus (200) according to claim 9, wherein said adjusting module (230) is adapted to abort a subtask being constructed that belongs to a main task whose queuing level is lower than that of said main task, when the queuing level of each subtask to be constructed in said main task has reached the highest level.
12. A terminal (300) for scheduling tasks, comprising:
a master server (310), adapted to allocate the subtasks to be constructed in the main task to a slave server (320); and
a slave server (320), adapted to receive the subtasks to be constructed allocated by the master server (310) and to perform their construction;
wherein the master server (310) comprises a first memory and a first processor connected to the first memory, the first memory storing first computer instructions executable on the first processor, and the first processor, when executing the first computer instructions, performing the steps of the method of any one of claims 1 to 6.
13. A first storage medium having first computer instructions stored thereon for execution by a first processor to perform the steps of the method of any of claims 1 to 6.
CN202011050166.6A 2020-09-29 2020-09-29 Method, device, terminal and medium for scheduling task and training neural network model Active CN112130979B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210669734.3A CN115061794A (en) 2020-09-29 2020-09-29 Method, device, terminal and medium for scheduling task and training neural network model
CN202011050166.6A CN112130979B (en) 2020-09-29 2020-09-29 Method, device, terminal and medium for scheduling task and training neural network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011050166.6A CN112130979B (en) 2020-09-29 2020-09-29 Method, device, terminal and medium for scheduling task and training neural network model

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210669734.3A Division CN115061794A (en) 2020-09-29 2020-09-29 Method, device, terminal and medium for scheduling task and training neural network model

Publications (2)

Publication Number Publication Date
CN112130979A (en) 2020-12-25
CN112130979B (en) 2022-08-09

Family

ID=73844730

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011050166.6A Active CN112130979B (en) 2020-09-29 2020-09-29 Method, device, terminal and medium for scheduling task and training neural network model
CN202210669734.3A Pending CN115061794A (en) 2020-09-29 2020-09-29 Method, device, terminal and medium for scheduling task and training neural network model

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210669734.3A Pending CN115061794A (en) 2020-09-29 2020-09-29 Method, device, terminal and medium for scheduling task and training neural network model

Country Status (1)

Country Link
CN (2) CN112130979B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113553187A (en) * 2021-07-30 2021-10-26 咪咕文化科技有限公司 Method and device for concurrently constructing revops and computing equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180012600A (en) * 2016-07-27 2018-02-06 엘에스산전 주식회사 Apparatus for monitoring and controlling
CN111338791A (en) * 2020-02-12 2020-06-26 平安科技(深圳)有限公司 Method, device and equipment for scheduling cluster queue resources and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117260B (en) * 2018-08-30 2021-01-01 百度在线网络技术(北京)有限公司 Task scheduling method, device, equipment and medium
CN109784647B (en) * 2018-12-14 2023-04-18 兰州空间技术物理研究所 Task scheduling method for active potential control system of space station

Also Published As

Publication number Publication date
CN115061794A (en) 2022-09-16
CN112130979A (en) 2020-12-25

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant