CN112035230B - Task scheduling file generation method, device and storage medium - Google Patents


Info

Publication number
CN112035230B
Authority
CN
China
Prior art keywords
task
scheduling
file
subtasks
identifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010902677.XA
Other languages
Chinese (zh)
Other versions
CN112035230A (en)
Inventor
殷昊
尉迟美格
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd filed Critical Bank of China Ltd
Priority to CN202010902677.XA priority Critical patent/CN112035230B/en
Publication of CN112035230A publication Critical patent/CN112035230A/en
Application granted granted Critical
Publication of CN112035230B publication Critical patent/CN112035230B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiments of this specification provide a task scheduling file generation method, device and storage medium, wherein the method comprises the following steps: acquiring a task flow file; reading the execution sequence of each task and the task name of each task from the task flow file; determining a scheduling policy corresponding to each task according to the task name of each task; and generating a task scheduling file based on the scheduling policy corresponding to each task and the execution sequence of each task, so that each task can be scheduled according to the task scheduling file. The task scheduling file includes the task scheduling logical relationship, so the task flow file is recognized automatically and the efficiency of task scheduling processing is improved.

Description

Task scheduling file generation method, device and storage medium
Technical Field
The embodiment of the specification relates to the technical field of computers, in particular to a task scheduling file generation method, a task scheduling file generation device and a storage medium.
Background
Currently, with the development of computer technology, much banking data can be processed by a core banking system. A core banking system is an electronic system that handles customer-centered accounting, supports a comprehensive teller system, and provides 24-hour service. The core banking system needs to schedule multiple jobs when processing batch jobs.
The TWS (Tivoli Workload Scheduler) tool is a batch job scheduling tool that runs on IBM mainframes and schedules jobs according to a configured job logic relationship. In existing practice, the batch job logic relationship file of a version is read manually, and the job modules are then processed, edited, added and deleted in the TWS tool, maintaining the jobs one by one by hand.
In a core banking system, as the business continuously develops, the system functions keep expanding and improving and the number of jobs in the nightly batch process keeps growing. The core system currently has up to tens of thousands of batch jobs, and every time a new version goes into production, the upgrade work related to the batch job scheduling relationships becomes more and more complicated.
As the number of jobs and the degree of concurrent splitting keep increasing, the complexity of the logic relationship graph grows geometrically. Manually reading the file and manually editing, adding and deleting jobs in the TWS tool becomes less and less efficient, while the risk of errors during manual editing also increases greatly.
Disclosure of Invention
An object of the embodiments of the present disclosure is to provide a method, an apparatus, and a storage medium for generating a task scheduling file, so as to automatically identify a task flow file and improve efficiency of task scheduling processing.
In order to solve the above problem, an embodiment of the present disclosure provides a task scheduling file generation method, including: acquiring a task flow file; reading the execution sequence of each task and the task name of each task from the task flow file; determining a scheduling policy corresponding to each task according to the task name of each task, the scheduling policy characterizing the manner in which the task is executed; and generating a task scheduling file based on the scheduling policy corresponding to each task and the execution sequence of each task, so that each task can be scheduled according to the task scheduling file; wherein the task scheduling file includes a task scheduling logical relationship.
In order to solve the above problem, an embodiment of the present disclosure further provides a task scheduling file generation apparatus, including: an acquisition module, used for acquiring the task flow file; a reading module, used for reading the execution sequence of each task and the task name of each task from the task flow file; a determining module, used for determining a scheduling policy corresponding to each task according to the task name of each task, the scheduling policy characterizing the manner in which the task is executed; and a generating module, used for generating a task scheduling file based on the scheduling policy corresponding to each task and the execution sequence of each task, so that each task can be scheduled according to the task scheduling file; wherein the task scheduling file includes a task scheduling logical relationship.
To solve the above problem, embodiments of the present disclosure further provide an electronic device, including: a memory for storing a computer program; and a processor for executing the computer program to implement: acquiring a task flow file; reading the execution sequence of each task and the task name of each task from the task flow file; determining a scheduling policy corresponding to each task according to the task name of each task, the scheduling policy characterizing the manner in which the task is executed; and generating a task scheduling file based on the scheduling policy corresponding to each task and the execution sequence of each task, so that each task can be scheduled according to the task scheduling file; wherein the task scheduling file includes a task scheduling logical relationship.
To solve the above problems, the embodiments of the present specification further provide a computer-readable storage medium having stored thereon computer instructions that, when executed, implement: acquiring a task flow file; reading the execution sequence of each task and the task name of each task from the task flow file; determining a scheduling policy corresponding to each task according to the task name of each task, the scheduling policy characterizing the manner in which the task is executed; and generating a task scheduling file based on the scheduling policy corresponding to each task and the execution sequence of each task, so that each task can be scheduled according to the task scheduling file; wherein the task scheduling file includes a task scheduling logical relationship.
As can be seen from the technical solutions provided in the embodiments of the present specification, a task flow file may be obtained; the execution sequence of each task and the task name of each task may be read from the task flow file; a scheduling policy corresponding to each task may be determined according to the task name of each task, the scheduling policy characterizing the manner in which the task is executed; and a task scheduling file may be generated based on the scheduling policy corresponding to each task and the execution sequence of each task, so that each task can be scheduled according to the task scheduling file. The task scheduling file includes the task scheduling logical relationship, so the task flow file is recognized automatically and the efficiency of task scheduling processing is improved.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some of the embodiments described in this specification, and a person skilled in the art may obtain other drawings from these drawings without inventive effort.
FIG. 1 is a flowchart of a task scheduling file generating method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a task flow file according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a task flow file according to an embodiment of the present disclosure;
FIG. 4 is a schematic functional structural diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 5 is a schematic functional structural diagram of a task scheduling file generating device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions of the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is apparent that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
In the embodiments of the present disclosure, a batch may refer to a bank back-end system running a series of jobs at night to check, count, generate reports from and synchronize between systems the transaction information, customer information and account information of the current day (or of the current month or quarter). A job may be a step in the batch process; it may be one function point or a small set of function points. Every job in the batch needs to be processed during batch processing; each job corresponds to a task, and when all the tasks have been processed, the batch processing is complete.
In the present embodiment, the IBM host is an IBM mainframe running the z/OS operating system. The TWS tool is a batch job scheduling tool that runs on the IBM host and schedules jobs according to the configured job logic relationship.
In a core banking system, as the business continuously develops, the system functions keep expanding and improving, the number of jobs in the nightly batch process keeps growing, many jobs have to be scheduled during batch processing, and limited service resources should be used as fully as possible. Typically, a worker writes a task flow file that includes the task names, the task execution sequence and so on, but the TWS tool cannot directly recognize the task flow file, so the contents of the task flow file must be converted into a file that the TWS tool can recognize. In existing practice, the task flow file of a version is read manually, and the job modules are then processed, edited, added and deleted in the TWS tool, maintaining the jobs one by one by hand. However, as the number of tasks and the degree of concurrent splitting keep increasing, the complexity of the task flow file grows geometrically; manual reading and manual editing, adding and deleting in the TWS tool become less and less efficient, while the risk of errors during manual editing also increases greatly. If the task flow file that abstracts the job logic relationship in a program version can be processed and parsed automatically into text that the TWS tool can recognize, a great deal of manual editing of task relationships in the TWS tool can be avoided, efficiency can be improved, and the risk of manual misoperation can be reduced.
Please refer to FIG. 1. The embodiments of this specification provide a task scheduling file generation method. In the embodiments of the present specification, the subject that performs the task scheduling file generation method may be an electronic device having a logical operation function, and the electronic device may be a server. The server may be an electronic device with a certain arithmetic processing capability, and may have a network communication unit, a processor, a memory and the like. Of course, the server is not limited to a physical electronic device and may also be software running in an electronic device. The server may also be a distributed server, that is, a system having a plurality of processors, memories, network communication modules and the like operating in concert. Alternatively, the server may be a server cluster formed of several servers. The method may comprise the following steps.
S110: acquiring a task flow file.
In some embodiments, the task flow file may be a pre-written file, and may include information such as the task names of a plurality of tasks and the execution sequence of the tasks.
In some embodiments, as shown in FIG. 2, the task flow file may be a file edited by VISIO.
In some embodiments, the user may upload the task flow file in the server. The server may receive the task flow file uploaded by the user. For example, the server may provide an interactive interface to the user, where the user may upload the task flow file. The server may receive the task flow file uploaded by the user. Alternatively, the user may upload the task flow file in the client. The client can receive the task flow file uploaded by the user and send the task flow file to the server. The server may receive the task flow file. For example, the client may provide an interactive interface to the user, where the user may upload the task flow file. The client can receive the task flow file uploaded by the user and send the task flow file to the server. The client may be, for example, a smart phone, a tablet computer, a notebook computer, a desktop computer, and the like. The client may be capable of communicating with the server, such as via a wired network and/or a wireless network.
S120: reading the execution sequence of each task and the task name of each task from the task flow file.
In some embodiments, the server may read the execution sequence of each task and the task name of each task from the task flow file. Specifically, the server may use a Java program to read the elements in the task flow file edited with VISIO, and then process the read result to obtain the execution sequence of each task and the task name of each task.
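By way of illustration only, the following Java sketch shows one way such a reading step might be implemented. A .vsdx file is an OPC/ZIP package containing page XML, so the sketch opens the package with the standard library and pulls out shape texts (candidate task names) and connector endpoints (candidate execution sequence). The file name "taskflow.vsdx", the entry path "visio/pages/page1.xml" and the element names "Shape" and "Connect" are assumptions about the VISIO file layout; the patent itself only states that a Java program reads the elements of the task flow file.

    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class TaskFlowReader {
        // Reads shape texts and connector endpoints from one page of a .vsdx package.
        // Entry path and XML element names are assumptions about the .vsdx layout.
        public static void main(String[] args) throws Exception {
            try (ZipFile vsdx = new ZipFile("taskflow.vsdx")) {            // hypothetical file name
                ZipEntry page = vsdx.getEntry("visio/pages/page1.xml");     // assumed entry path
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(vsdx.getInputStream(page));

                // Candidate task names: the text of each shape on the page.
                NodeList shapes = doc.getElementsByTagName("Shape");
                for (int i = 0; i < shapes.getLength(); i++) {
                    Element shape = (Element) shapes.item(i);
                    System.out.println("shape " + shape.getAttribute("ID")
                            + " text=" + shape.getTextContent().trim());
                }

                // Candidate execution sequence: connector attachments between shapes.
                NodeList connects = doc.getElementsByTagName("Connect");
                for (int i = 0; i < connects.getLength(); i++) {
                    Element c = (Element) connects.item(i);
                    System.out.println("connector " + c.getAttribute("FromSheet")
                            + " attaches to shape " + c.getAttribute("ToSheet"));
                }
            }
        }
    }

In practice the raw read result would still be post-processed into an ordered list of task names, as described above.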
S130: determining a scheduling policy corresponding to each task according to the task name of each task; the scheduling policy characterizes the manner in which the task is executed.
In some embodiments, the scheduling policy characterizes the manner in which the task is executed, for example, how the task is scheduled and how it is run.
In some embodiments, the task name includes a scheduling identifier; the scheduling identifier is used for identifying a scheduling policy; correspondingly, the scheduling policy corresponding to the task is determined according to the scheduling identifier. Specifically, for each task in the task flow file, the task name may consist of a task identifier and a scheduling identifier. The task identifier serves to identify the task, and the scheduling identifier may be used to identify the scheduling policy. For example, for a task with the task name "AB100000", the first four characters "AB10" may be the task identifier and the last four characters "0000" may be the scheduling identifier. The task identifiers of different tasks are different, while the scheduling identifiers of different tasks may be the same or different. For example, the task named "AB100000" and the task named "AB200000" are different tasks, but their scheduling identifiers are the same, which may indicate that they have the same scheduling policy; the task named "AB100000" and the task named "AB20XX00" are different tasks and have different scheduling policies. Of course, the foregoing is only an example; the task identifier and the scheduling identifier in the task name may be other numbers, letters, symbols, or combinations of numbers, letters and symbols, which is not limited in the embodiments of the present disclosure.
In some embodiments, the scheduling policy may include: scheduling the next task after the task has been executed; or splitting the task into a plurality of subtasks and executing the next task after the plurality of subtasks have been executed. For example, for a task with the scheduling identifier "0000", the corresponding scheduling policy may be to execute the next task after the task itself has been scheduled and executed; for a scheduling identifier other than "0000", for example "00**", "XX00" or "XX**", the corresponding scheduling policy may be to split the task into a plurality of subtasks and execute the next task after the plurality of subtasks have been executed. Of course, the correspondence between the scheduling identifier and the scheduling policy is not limited to the above examples; other modifications may be made by those skilled in the art in light of the technical spirit of the present application, and as long as the functions and effects achieved are the same as or similar to those of the present application, they fall within the protection scope of the present application.
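As a minimal sketch of the naming convention and the identifier-to-policy mapping described above (and assuming the "XX" and "**" placeholder characters used in the examples), a Java program might classify a task name as follows; the class and enum names are illustrative and do not appear in the patent.

    public class SchedulingIdParser {
        enum Policy { RUN_THEN_NEXT, SPLIT_BY_PARTITION, SPLIT_BY_PARALLELISM, SPLIT_BY_BOTH }

        // Split an 8-character task name such as "AB20XX00" into its two parts
        // and derive the scheduling policy from the scheduling identifier.
        static Policy policyOf(String taskName) {
            String taskId = taskName.substring(0, 4);      // task identifier, e.g. "AB10"; unique per task
            String schedId = taskName.substring(4);        // scheduling identifier, e.g. "XX00"
            boolean byPartition = schedId.contains("XX");  // data spread over database partitions
            boolean byParallel  = schedId.contains("**");  // subtasks run in parallel
            if (byPartition && byParallel) return Policy.SPLIT_BY_BOTH;
            if (byPartition)               return Policy.SPLIT_BY_PARTITION;
            if (byParallel)                return Policy.SPLIT_BY_PARALLELISM;
            return Policy.RUN_THEN_NEXT;                   // "0000": run the task, then schedule the next one
        }

        public static void main(String[] args) {
            for (String name : new String[] {"AB100000", "AB20XX00", "AB3000**", "AB40XX**"}) {
                System.out.println(name + " -> " + policyOf(name));
            }
        }
    }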
In some embodiments, a banking enterprise may generally include a headquarters and branches in provinces across the country, and the data of the headquarters and of the provincial branches is stored in different partitions of the database. Some tasks can be executed directly, and the next task can be executed after they finish. For other tasks, however, the data is stored in different partitions of the database; in order to improve the execution efficiency of such a task, it can be split into a plurality of subtasks, and the next task can be executed after every subtask has been executed. In order to improve the resource utilization efficiency of the server, a task can also be split into a plurality of subtasks that are executed in parallel, and the next task is executed after the subtasks have been executed. For more complex tasks or tasks with a larger data volume, after the task is split into a plurality of subtasks, each subtask is split again, and the next task is executed after all of the re-split subtasks have been executed.
In some embodiments, splitting the task into a plurality of subtasks and executing the next task after the plurality of subtasks are executed may include: splitting the task into a preset number of subtasks and executing the next task after the subtasks have been executed, wherein the data of the plurality of subtasks is stored in different partitions of the database. Specifically, the database may include a plurality of different partitions, and the preset number may be the number of partitions in the database. The different partitions in the database have partition identifiers, each partition identifier uniquely identifies one partition, and a partition identifier may consist of two digits. For a task with the task name "AB20XX00", the server may determine from the scheduling identifier "XX00" that the scheduling policy corresponding to the task is to split the task into a preset number of subtasks and execute the next task after the subtasks have been executed, wherein the data of the subtasks is stored in different partitions of the database. For example, if the partition identifiers are 01 to 15, identifying 15 partitions in total, the task may be split into 15 subtasks, each bound to a different partition identifier. Specifically, the partition identifier may be inserted into the task name to obtain the task name of the subtask; for example, the partition identifier may be substituted for the "XX" in the scheduling identifier to obtain the subtask names "AB200100, AB200200, ..., AB201500". The subtask corresponding to each partition is then processed according to the partition identifier in its task name, and the next task is executed after the plurality of subtasks have been executed.
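A minimal Java sketch of the partition-based split described above, assuming 15 partitions with two-digit identifiers 01 to 15 and the "XX" placeholder in the scheduling identifier; the class and method names are illustrative.

    import java.util.ArrayList;
    import java.util.List;

    public class PartitionSplitter {
        // The two-digit partition identifier replaces the "XX" placeholder in the
        // scheduling identifier, so "AB20XX00" becomes AB200100, AB200200, ..., AB201500.
        static List<String> splitByPartition(String taskName, int partitionCount) {
            List<String> subtasks = new ArrayList<>();
            for (int p = 1; p <= partitionCount; p++) {
                String partitionId = String.format("%02d", p);   // e.g. "01" .. "15"
                subtasks.add(taskName.replace("XX", partitionId));
            }
            return subtasks;
        }

        public static void main(String[] args) {
            System.out.println(splitByPartition("AB20XX00", 15));
        }
    }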
In some embodiments, splitting the task into a plurality of subtasks and executing the next task after the plurality of subtasks are executed may include: presetting the parallelism of task execution, the parallelism characterizing the maximum number of tasks to be executed in parallel; splitting the task into a plurality of subtasks, the number of subtasks being equal to the parallelism; and executing the plurality of subtasks in parallel. Specifically, for a task with the task name "AB3000**", the server may determine from the scheduling identifier "00**" that the scheduling policy corresponding to the task is: preset the parallelism of task execution, split the task into a number of subtasks equal to the parallelism, and execute the subtasks in parallel. For example, if the preset parallelism of task execution is 8, the task may be split into 8 subtasks, and the subtask names may be set according to the parallelism; for example, the "**" in the scheduling identifier may be modified to obtain the subtask names "AB300001, AB300002, ..., AB300008". The plurality of subtasks are executed in parallel, and the next task is executed after the plurality of subtasks have been executed.
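The following Java sketch illustrates the parallelism-based split described above, assuming a preset parallelism of 8 and the "**" placeholder; the subtask bodies are placeholders, and an ExecutorService stands in for whatever scheduler actually runs the jobs.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ParallelSplitRunner {
        // "AB3000**" is split into AB300001 ... AB300008 when the preset parallelism is 8;
        // the subtasks run in parallel and the next task starts only after all of them finish.
        public static void main(String[] args) throws InterruptedException {
            String taskName = "AB3000**";
            int parallelism = 8;                                    // preset parallelism (assumed value)
            List<Callable<Void>> subtasks = new ArrayList<>();
            for (int i = 1; i <= parallelism; i++) {
                String subtaskName = taskName.replace("**", String.format("%02d", i));
                subtasks.add(() -> {
                    System.out.println("running " + subtaskName);   // placeholder for the real job body
                    return null;
                });
            }
            ExecutorService pool = Executors.newFixedThreadPool(parallelism);
            pool.invokeAll(subtasks);                               // blocks until every subtask has finished
            pool.shutdown();
            System.out.println("all subtasks of " + taskName + " done; the next task can be scheduled");
        }
    }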
In some embodiments, splitting the task into a plurality of subtasks and executing the next task after the plurality of subtasks are executed may further include: splitting the task into a preset number of subtasks, wherein the data of the plurality of subtasks is stored in different partitions of the database; splitting each subtask again according to the preset parallelism of task execution; executing the re-split subtasks in parallel; and executing the next task after the re-split subtasks have been executed, the number of tasks obtained by splitting each subtask being equal to the parallelism. Specifically, for a task with the task name "AB40XX**", the server may determine from the scheduling identifier "XX**" that the scheduling policy corresponding to the task is to split the task into a preset number of subtasks whose data is stored in different partitions of the database, split each subtask again according to the preset parallelism, execute the re-split subtasks in parallel, and execute the next task after they have all been executed. For example, if the partition identifiers are 01 to 15, identifying 15 partitions in total, the task may be split into 15 subtasks, each bound to a different partition identifier. Specifically, the partition identifier may be inserted into the task name to obtain the task name of the subtask; for example, the partition identifier may be substituted for the "XX" in the scheduling identifier to obtain the subtask names "AB4001**, AB4002**, ..., AB4015**", and the subtask corresponding to each partition is processed according to the partition identifier in its task name. Further, for each subtask, if the preset parallelism of task execution is 8, the subtask can be split into 8 tasks; for the subtask "AB4001**", the "**" in the scheduling identifier can be modified to obtain the re-split task names "AB400101, AB400102, ..., AB400108", and the other subtasks can be split correspondingly. The re-split subtasks are executed in parallel, and the next task is executed after they have all been executed.
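A minimal Java sketch of the two-level split described above, again assuming the "XX" and "**" placeholders, 15 partitions and a parallelism of 8; the names are illustrative.

    import java.util.ArrayList;
    import java.util.List;

    public class TwoLevelSplitter {
        // First the "XX" placeholder is replaced by the partition identifier, then the "**"
        // placeholder by the parallel sequence number, e.g. "AB40XX**" -> "AB4001**" -> AB400101..AB400108.
        static List<String> split(String taskName, int partitionCount, int parallelism) {
            List<String> result = new ArrayList<>();
            for (int p = 1; p <= partitionCount; p++) {
                String perPartition = taskName.replace("XX", String.format("%02d", p));
                for (int k = 1; k <= parallelism; k++) {
                    result.add(perPartition.replace("**", String.format("%02d", k)));
                }
            }
            return result;
        }

        public static void main(String[] args) {
            // 15 partitions x parallelism 8 = 120 re-split subtasks for "AB40XX**"
            System.out.println(split("AB40XX**", 15, 8).size());
        }
    }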
In a specific application scenario, the method provided in the embodiments of the present disclosure may be applied to a banking enterprise, which may include a headquarters and provincial branches across the country. Take the task flow file shown in FIG. 3 as an example. The server can read the execution sequence of the tasks and determine the scheduling policy corresponding to each task according to its task name. Specifically, when task processing starts, the task "AB100000" may be executed first; its scheduling identifier "0000" determines that the corresponding scheduling policy is to execute the next task after the task itself has been scheduled and executed, that is, the task is executed directly and the next task is executed after it finishes. After the task "AB100000" has been executed, the task "AB20XX00" is executed; its scheduling identifier "XX00" determines that the corresponding scheduling policy is to split the task into a preset number of subtasks and execute the next task after the subtasks have been executed. If the banking enterprise comprises 10 provincial branches, the database can comprise the partitions corresponding to the branches, each partition storing the data of one branch; the task "AB20XX00" can then be split into 10 subtasks "AB200100, AB200200, ..., AB201000", the subtask corresponding to each partition is processed according to the partition identifier in its task name, and the next task is executed after the subtasks of all branches have been executed. After the task "AB20XX00" has been completely executed, the tasks "AB300000" and "AB4000**" are executed simultaneously. The scheduling identifier "0000" of the task "AB300000" determines that its scheduling policy is to execute the next task after the task itself has been executed. The scheduling identifier "00**" of the task "AB4000**" determines that its scheduling policy is to split the task into a plurality of subtasks, execute them in parallel, and execute the next task after they have all been executed; if the preset parallelism of task execution is 8, the task is split into 8 subtasks "AB400001, AB400002, ..., AB400008". Task processing ends after both tasks "AB300000" and "AB4000**" have been executed.
S140: generating a task scheduling file based on a scheduling strategy corresponding to each task and an execution sequence of each task so as to conveniently schedule each task according to the task scheduling file; wherein the task scheduling file includes a task scheduling logical relationship.
In some embodiments, the task scheduling logical relationship characterizes the manner in which the tasks are scheduled, for example, in what order the tasks are performed and how the tasks are assigned. After the scheduling policy corresponding to each task has been determined, a task scheduling file including the task scheduling logical relationship may be generated in combination with the execution sequence of the tasks. The task scheduling file may be a file recognizable by the TWS tool; each field in the task scheduling file and the attributes of each field represent the task scheduling logical relationship. The TWS tool can recognize the fields in the task scheduling file and the task scheduling logical relationship represented by their attributes, and schedule each task according to the task scheduling logical relationship in the task scheduling file.
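By way of illustration only, the following Java sketch writes the scheduling logical relationship of the FIG. 3 example to a text file. The "JOB ... FOLLOWS ..." layout is a placeholder invented for this sketch; the patent does not reproduce the actual file format accepted by the TWS tool, so a real generator would have to emit whatever loader syntax that tool expects.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class ScheduleFileWriter {
        // Turns per-task predecessor lists (the scheduling logical relationship) into a text file.
        public static void main(String[] args) throws IOException {
            Map<String, List<String>> follows = new LinkedHashMap<>();
            follows.put("AB100000", List.of());                        // first task: no predecessor
            follows.put("AB20XX00", List.of("AB100000"));
            follows.put("AB300000", List.of("AB20XX00"));
            follows.put("AB40XX**", List.of("AB20XX00"));

            StringBuilder out = new StringBuilder();
            for (Map.Entry<String, List<String>> e : follows.entrySet()) {
                out.append("JOB ").append(e.getKey());
                if (!e.getValue().isEmpty()) {
                    out.append(" FOLLOWS ").append(String.join(",", e.getValue()));
                }
                out.append(System.lineSeparator());
            }
            Files.writeString(Paths.get("task_schedule.txt"), out);    // hypothetical output file name
        }
    }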
With the task scheduling file generation method provided in the embodiments of the present specification, a task flow file can be acquired; the execution sequence of each task and the task name of each task can be read from the task flow file; a scheduling policy corresponding to each task can be determined according to the task name of each task, the scheduling policy characterizing the manner in which the task is executed; and a task scheduling file can be generated based on the scheduling policy corresponding to each task and the execution sequence of each task, so that each task can be scheduled according to the task scheduling file, wherein the task scheduling file includes the task scheduling logical relationship. With the method provided in the embodiments of this specification, the task flow file that abstracts the job logic relationship in a program version is processed and parsed automatically into text that the TWS tool can recognize, which improves the efficiency of task scheduling processing and at the same time reduces the risk of manual operation errors.
FIG. 4 is a schematic functional structural diagram of an electronic device according to an embodiment of the present disclosure; the electronic device may include a memory and a processor.
In some embodiments, the memory may be used to store the computer program and/or module, and the processor may implement the various functions of task scheduling file generation by running or executing the computer program and/or module stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required for a function, and the data storage area may store data created according to the use of the user terminal. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
The processor may be a Central Processing Unit (CPU), or may be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor may execute the computer instructions to implement the following steps: acquiring a task flow file; reading the execution sequence of each task and the task name of each task from the task flow file; determining a scheduling policy corresponding to each task according to the task name of each task, the scheduling policy characterizing the manner in which the task is executed; and generating a task scheduling file based on the scheduling policy corresponding to each task and the execution sequence of each task, so that each task can be scheduled according to the task scheduling file, wherein the task scheduling file includes a task scheduling logical relationship.
In the embodiments of the present disclosure, the specific functions and effects of the electronic device can be understood with reference to the other embodiments and are not described again here.
FIG. 5 is a schematic functional structural diagram of a task scheduling file generating device according to an embodiment of the present disclosure; the device may specifically include the following structural modules.
An obtaining module 510, configured to obtain a task flow file;
a reading module 520, configured to read an execution sequence of each task and a task name of each task from the task flow file;
a determining module 530, configured to determine a scheduling policy corresponding to each task according to the task name of each task; the scheduling policy characterizes the manner in which the task is executed;
a generating module 540, configured to generate a task scheduling file based on the scheduling policy corresponding to each task and the execution sequence of each task, so that each task can be scheduled according to the task scheduling file; wherein the task scheduling file includes a task scheduling logical relationship.
The embodiments of the present specification also provide a computer-readable storage medium for the task scheduling file generation method; the computer-readable storage medium stores computer program instructions that, when executed, implement: acquiring a task flow file; reading the execution sequence of each task and the task name of each task from the task flow file; determining a scheduling policy corresponding to each task according to the task name of each task, the scheduling policy characterizing the manner in which the task is executed; and generating a task scheduling file based on the scheduling policy corresponding to each task and the execution sequence of each task, so that each task can be scheduled according to the task scheduling file, wherein the task scheduling file includes a task scheduling logical relationship.
In the present embodiment, the storage medium includes, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a cache, a Hard Disk Drive (HDD) or a Memory Card. The memory may be used to store the computer program and/or module, and may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like, and the data storage area may store data created according to the use of the user terminal, and the like. Further, the memory may include a high-speed random access memory, and may also include a non-volatile memory. In the embodiments of the present disclosure, the functions and effects specifically implemented by the program instructions stored in the computer-readable storage medium can be understood with reference to the other embodiments and are not described again here.
It should be noted that, in the present specification, each embodiment is described in a progressive manner, and the same or similar parts of each embodiment are referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the device embodiments and the apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments in part.
Those skilled in the art, after reading this specification, will recognize without undue burden that any and all of the embodiments set forth herein can be combined, and that such combinations are within the scope of the disclosure and protection of the present specification.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, a transistor or a switch) or an improvement in software (an improvement to a method flow). However, with the development of technology, many improvements of method flows today can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a Programmable Logic Device (PLD) (e.g., a Field Programmable Gate Array (FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually manufacturing integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used at present. It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using one of the above hardware description languages.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present description may be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present specification, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk or an optical disk, and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in the embodiments or some parts of the embodiments of the present specification.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The specification is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Although the present specification has been described by way of example, it will be appreciated by those skilled in the art that there are many variations and modifications to the specification without departing from the spirit of the specification, and it is intended that the appended claims encompass such variations and modifications as do not depart from the spirit of the specification.

Claims (9)

1. A method for generating a task scheduling file, the method comprising:
acquiring a task flow file;
reading the execution sequence of each task and the task name of each task from the task flow file;
determining a scheduling policy corresponding to each task according to the task name of each task; the scheduling policy characterizes the manner in which the task is executed;
generating a task scheduling file based on the scheduling policy corresponding to each task and the execution sequence of each task, so that each task can be scheduled according to the task scheduling file; wherein the task scheduling file comprises a task scheduling logical relationship;
wherein, for each task in the task flow file, the task name consists of a task identifier and a scheduling identifier; the task identifier serves to identify the task, the scheduling identifier is used for identifying a scheduling policy, and the scheduling policy corresponding to the task is determined according to the scheduling identifier;
wherein the scheduling policy includes at least one of:
after the completion of the task execution, executing the next task;
splitting a task into a plurality of subtasks, and executing the next task after the plurality of subtasks are executed;
the generated task scheduling file is a file recognizable by the TWS tool, so that the TWS tool can schedule each task according to the task scheduling logical relationship in the task scheduling file.
2. The method of claim 1, wherein the task flow file is a file edited with VISIO.
3. The method of claim 1, wherein the task name comprises a scheduling identifier; the scheduling identifier is used for identifying a scheduling policy; and correspondingly, the scheduling policy corresponding to the task is determined according to the scheduling identifier.
4. The method of claim 1, wherein splitting the task into a plurality of sub-tasks, and performing a next task after the plurality of sub-tasks are performed comprises:
splitting the task into a preset number of subtasks, and executing the next task after the subtasks are executed; wherein the data of the plurality of subtasks is stored in different partitions of the database.
5. The method of claim 1, wherein splitting the task into a plurality of sub-tasks, and performing a next task after the plurality of sub-tasks are performed comprises:
presetting the parallelism of task execution; the parallelism characterizes the maximum number of tasks to be executed in parallel;
splitting a task into a plurality of subtasks; the number of subtasks is equal to the parallelism;
and executing the plurality of subtasks in parallel, and executing the next task after the plurality of subtasks are executed.
6. The method of claim 1, wherein splitting the task into a plurality of sub-tasks, and performing a next task after the plurality of sub-tasks are performed comprises:
splitting the task into a preset number of subtasks; wherein the data of the plurality of subtasks is stored in different partitions of the database;
splitting each subtask according to the preset parallelism of task execution, executing each split subtask in parallel, and executing the next task after the split subtasks are executed; the number of the tasks obtained after splitting each subtask is equal to the parallelism.
7. A task scheduling file generating apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring the task flow file;
the reading module is used for reading the execution sequence of each task and the task name of each task from the task flow file;
the determining module is used for determining a scheduling policy corresponding to each task according to the task name of each task; the scheduling policy characterizes the manner in which the task is executed;
the generating module is used for generating a task scheduling file based on the scheduling policy corresponding to each task and the execution sequence of each task, so that each task can be scheduled according to the task scheduling file; wherein the task scheduling file comprises a task scheduling logical relationship;
wherein, for each task in the task flow file, the task name consists of a task identifier and a scheduling identifier; the task identifier serves to identify the task, the scheduling identifier is used for identifying a scheduling policy, and the scheduling policy corresponding to the task is determined according to the scheduling identifier;
wherein the scheduling policy includes at least one of:
after the completion of the task execution, executing the next task;
splitting a task into a plurality of subtasks, and executing the next task after the plurality of subtasks are executed;
the generated task scheduling file is a file recognizable by the TWS tool, so that the TWS tool can schedule each task according to the task scheduling logical relationship in the task scheduling file.
8. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement: acquiring a task flow file; reading the execution sequence of each task and the task name of each task from the task flow file; determining a scheduling policy corresponding to each task according to the task name of each task, the scheduling policy characterizing the manner in which the task is executed; generating a task scheduling file based on the scheduling policy corresponding to each task and the execution sequence of each task, so that each task can be scheduled according to the task scheduling file, wherein the task scheduling file comprises a task scheduling logical relationship; wherein, for each task in the task flow file, the task name consists of a task identifier and a scheduling identifier, the task identifier serves to identify the task, the scheduling identifier is used for identifying a scheduling policy, and the scheduling policy corresponding to the task is determined according to the scheduling identifier; wherein the scheduling policy includes at least one of: after the completion of the task execution, executing the next task; and splitting a task into a plurality of subtasks, and executing the next task after the plurality of subtasks are executed; and the generated task scheduling file is a file recognizable by the TWS tool, so that the TWS tool can schedule each task according to the task scheduling logical relationship in the task scheduling file.
9. A computer-readable storage medium having stored thereon computer instructions that, when executed, implement: acquiring a task flow file; reading the execution sequence of each task and the task name of each task from the task flow file; determining a scheduling policy corresponding to each task according to the task name of each task, the scheduling policy characterizing the manner in which the task is executed; generating a task scheduling file based on the scheduling policy corresponding to each task and the execution sequence of each task, so that each task can be scheduled according to the task scheduling file, wherein the task scheduling file comprises a task scheduling logical relationship; wherein, for each task in the task flow file, the task name consists of a task identifier and a scheduling identifier, the task identifier serves to identify the task, the scheduling identifier is used for identifying a scheduling policy, and the scheduling policy corresponding to the task is determined according to the scheduling identifier; wherein the scheduling policy includes at least one of: after the completion of the task execution, executing the next task; and splitting a task into a plurality of subtasks, and executing the next task after the plurality of subtasks are executed; and the generated task scheduling file is a file recognizable by the TWS tool, so that the TWS tool can schedule each task according to the task scheduling logical relationship in the task scheduling file.
CN202010902677.XA 2020-09-01 2020-09-01 Task scheduling file generation method, device and storage medium Active CN112035230B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010902677.XA CN112035230B (en) 2020-09-01 2020-09-01 Task scheduling file generation method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010902677.XA CN112035230B (en) 2020-09-01 2020-09-01 Task scheduling file generation method, device and storage medium

Publications (2)

Publication Number Publication Date
CN112035230A CN112035230A (en) 2020-12-04
CN112035230B true CN112035230B (en) 2023-08-18

Family

ID=73586650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010902677.XA Active CN112035230B (en) 2020-09-01 2020-09-01 Task scheduling file generation method, device and storage medium

Country Status (1)

Country Link
CN (1) CN112035230B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326117B (en) * 2021-07-15 2021-10-29 中国电子科技集团公司第十五研究所 Task scheduling method, device and equipment
CN113626173B (en) * 2021-08-31 2023-12-12 阿里巴巴(中国)有限公司 Scheduling method, scheduling device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101038559A (en) * 2006-09-11 2007-09-19 中国工商银行股份有限公司 Batch task scheduling engine and dispatching method
CN103838625A (en) * 2014-02-27 2014-06-04 中国工商银行股份有限公司 Data interaction method and system
CN104933618A (en) * 2015-06-03 2015-09-23 中国银行股份有限公司 Monitoring method and apparatus for batch work operation data of core banking system
CN106779582A (en) * 2016-11-24 2017-05-31 中国银行股份有限公司 A kind of TWS flows collocation method and device
CN110134598A (en) * 2019-05-05 2019-08-16 中国银行股份有限公司 A kind of batch processing method, apparatus and system
WO2020140683A1 (en) * 2019-01-04 2020-07-09 深圳壹账通智能科技有限公司 Task scheduling method and apparatus, computer device, and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101038559A (en) * 2006-09-11 2007-09-19 中国工商银行股份有限公司 Batch task scheduling engine and dispatching method
CN103838625A (en) * 2014-02-27 2014-06-04 中国工商银行股份有限公司 Data interaction method and system
CN104933618A (en) * 2015-06-03 2015-09-23 中国银行股份有限公司 Monitoring method and apparatus for batch work operation data of core banking system
CN106779582A (en) * 2016-11-24 2017-05-31 中国银行股份有限公司 A kind of TWS flows collocation method and device
WO2020140683A1 (en) * 2019-01-04 2020-07-09 深圳壹账通智能科技有限公司 Task scheduling method and apparatus, computer device, and storage medium
CN110134598A (en) * 2019-05-05 2019-08-16 中国银行股份有限公司 A kind of batch processing method, apparatus and system

Also Published As

Publication number Publication date
CN112035230A (en) 2020-12-04

Similar Documents

Publication Publication Date Title
CN107239324B (en) Service flow processing method, device and system
CN107016029B (en) Method, device and system for processing service data
CN112035230B (en) Task scheduling file generation method, device and storage medium
CN114925084B (en) Distributed transaction processing method, system, equipment and readable storage medium
CN111492344B (en) System and method for monitoring execution of Structured Query Language (SQL) queries
US8146085B2 (en) Concurrent exception handling using an aggregated exception structure
US20090248186A1 (en) Methods and Systems for Matching Configurable Manufacturing Capacity Requirements and Availability
US11449407B2 (en) System and method for monitoring computing platform parameters and dynamically generating and deploying monitoring packages
CN111143461B (en) Mapping relation processing system, method and electronic equipment
US11262986B2 (en) Automatic software generation for computer systems
US9298473B2 (en) System and method for a generic object access layer
CN110908644A (en) Configuration method and device of state node, computer equipment and storage medium
CN110569315A (en) Data processing method and device based on data warehouse
CN111881025B (en) Automatic test task scheduling method, device and system
CN114217790A (en) Interface scheduling method and device, electronic equipment and medium
CN109857380B (en) Workflow file compiling method and device
CN113971074A (en) Transaction processing method and device, electronic equipment and computer readable storage medium
US20180046966A1 (en) System and method for analyzing and prioritizing issues for automation
CN116301758B (en) Rule editing method, device, equipment and medium based on event time points
CN111414162B (en) Data processing method, device and equipment thereof
CN111984454B (en) Task timeout monitoring method, device and storage medium
CN115964075B (en) Application export import method and device, computer equipment and storage medium
WO2022199387A1 (en) File processing method and system, and computer device and medium
CN115062060A (en) Method for improving spring-batch framework batch processing execution efficiency
CN114048259A (en) Data export method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant