CN112035230A - Method and device for generating task scheduling file and storage medium - Google Patents

Method and device for generating task scheduling file and storage medium Download PDF

Info

Publication number
CN112035230A
CN112035230A CN202010902677.XA
Authority
CN
China
Prior art keywords
task
scheduling
file
subtasks
tasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010902677.XA
Other languages
Chinese (zh)
Other versions
CN112035230B (en)
Inventor
殷昊
尉迟美格
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd filed Critical Bank of China Ltd
Priority to CN202010902677.XA priority Critical patent/CN112035230B/en
Publication of CN112035230A publication Critical patent/CN112035230A/en
Application granted granted Critical
Publication of CN112035230B publication Critical patent/CN112035230B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An embodiment of the present specification provides a method, an apparatus, and a storage medium for generating a task scheduling file, where the method includes: acquiring a task flow file; reading the execution sequence of each task and the task name of each task from the task flow file; determining a scheduling strategy corresponding to each task according to the task name of each task; generating a task scheduling file based on a scheduling strategy corresponding to each task and an execution sequence of each task so as to schedule each task according to the task scheduling file; the task scheduling file comprises a task scheduling logic relation, so that the task flow file is automatically identified, and the task scheduling processing efficiency is improved.

Description

Method and device for generating task scheduling file and storage medium
Technical Field
The embodiment of the specification relates to the technical field of computers, in particular to a method and a device for generating a task scheduling file and a storage medium.
Background
With the development of computer technology, a large amount of banking data is now processed by a core banking system. A core banking system is a customer-centered electronic system that performs accounting processing, supports a comprehensive teller system, and provides 24-hour service. When processing a batch job, the core banking system needs to schedule a plurality of jobs.
The TWS (Tivoli Workload Scheduler) tool is a batch job scheduling tool based on an IBM mainframe; it schedules jobs according to a configured logical relationship between job runs. In existing practice, the batch job logical relationship files in each version are read manually, and the jobs are then processed, edited, added, and deleted one by one in the TWS tool for manual maintenance.
In a core banking system, as the business continuously develops, the functions of the system are continuously expanded and improved, and the number of jobs run in the nightly batch keeps growing. At present, the core system has tens of thousands of batch jobs, and the upgrade work involving the scheduling relationship of the batch jobs becomes more and more cumbersome each time a new version is produced.
As the number of jobs keeps growing and the concurrency splitting increases, the complexity of the logical relationship graph grows geometrically. The approach of manually reading the files and manually adding, deleting, and editing jobs in the TWS tool becomes less and less efficient, while the risk of errors during manual editing increases greatly.
Disclosure of Invention
An object of the embodiments of the present specification is to provide a method, an apparatus, and a storage medium for generating a task scheduling file, so as to implement automatic identification of a task flow file and improve efficiency of task scheduling processing.
In order to solve the above problem, an embodiment of the present specification provides a method for generating a task scheduling file, where the method includes: acquiring a task flow file; reading the execution sequence of each task and the task name of each task from the task flow file; determining a scheduling strategy corresponding to each task according to the task name of each task; the scheduling strategy represents an execution mode of a task; generating a task scheduling file based on a scheduling strategy corresponding to each task and an execution sequence of each task so as to schedule each task according to the task scheduling file; and the task scheduling file comprises a task scheduling logic relation.
In order to solve the above problem, an embodiment of the present specification further provides a task scheduling file generating apparatus, where the apparatus includes: the acquisition module is used for acquiring a task flow file; the reading module is used for reading the execution sequence of each task and the task name of each task from the task flow file; the determining module is used for determining a scheduling strategy corresponding to each task according to the task name of each task; the scheduling strategy represents an execution mode of a task; the generating module is used for generating a task scheduling file based on a scheduling strategy corresponding to each task and an execution sequence of each task so as to schedule each task according to the task scheduling file; and the task scheduling file comprises a task scheduling logic relation.
In order to solve the above problem, an embodiment of the present specification further provides an electronic device, including: a memory for storing a computer program; a processor for executing the computer program to implement: acquiring a task flow file; reading the execution sequence of each task and the task name of each task from the task flow file; determining a scheduling strategy corresponding to each task according to the task name of each task; the scheduling strategy represents an execution mode of a task; generating a task scheduling file based on a scheduling strategy corresponding to each task and an execution sequence of each task so as to schedule each task according to the task scheduling file; and the task scheduling file comprises a task scheduling logic relation.
To solve the above problem, embodiments of the present specification further provide a computer-readable storage medium having stored thereon computer instructions, which when executed, implement: acquiring a task flow file; reading the execution sequence of each task and the task name of each task from the task flow file; determining a scheduling strategy corresponding to each task according to the task name of each task; the scheduling strategy represents an execution mode of a task; generating a task scheduling file based on a scheduling strategy corresponding to each task and an execution sequence of each task so as to schedule each task according to the task scheduling file; and the task scheduling file comprises a task scheduling logic relation.
According to the technical scheme provided by the embodiment of the specification, the task flow file can be acquired; reading the execution sequence of each task and the task name of each task from the task flow file; determining a scheduling strategy corresponding to each task according to the task name of each task; the scheduling strategy represents an execution mode of a task; generating a task scheduling file based on a scheduling strategy corresponding to each task and an execution sequence of each task so as to schedule each task according to the task scheduling file; the task scheduling file comprises a task scheduling logic relation, so that the task flow file is automatically identified, and the task scheduling processing efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments described in the specification, and other drawings can be obtained by those skilled in the art without creative effort.
FIG. 1 is a flowchart illustrating a method for generating a task scheduling file according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a task flow file according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a task flow file according to an embodiment of the present disclosure;
fig. 4 is a functional structure diagram of an electronic device according to an embodiment of the present disclosure;
fig. 5 is a functional structure diagram of a task scheduling file generating device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present specification without any creative effort shall fall within the protection scope of the present specification.
In the embodiment of the present specification, a batch may refer to a series of jobs run at night by the bank back-end system to check, count, generate reports from, and synchronize between systems the transaction information, customer information, and account information of the day (or of the month or quarter). A job may be a step in a batch process; it may be one function point or a small set of function points. Every job in the batch needs to be processed during batch processing, where each job is equivalent to one task, and the batch is complete once all tasks have been processed.
In the embodiment of the present specification, the IBM mainframe is a mainframe of IBM Corporation running the z/OS system. The TWS tool is a batch job scheduling tool based on an IBM mainframe, and schedules jobs according to the configured logical relationship between job runs.
In a core banking system, as the business continuously develops, system functions are continuously expanded and improved, the number of jobs run in the nightly batch keeps growing, a plurality of jobs need to be scheduled when batch jobs are processed, and the limited service resources should be utilized as fully as possible for batch processing. Generally, a worker writes a task flow file containing task names, a task execution sequence, and the like, but the TWS tool cannot directly identify the task flow file, so the content of the task flow file has to be converted into a file that the TWS tool can identify. In existing practice, the task flow files in each version are read manually, and the tasks are then processed, edited, added, and deleted one by one in the TWS tool for manual maintenance. However, with the increasing number of tasks and finer concurrency splitting, the complexity of the task flow files grows geometrically; the approach of manually reading the files and manually adding, deleting, and editing work in the TWS tool becomes less and less efficient, while the risk of errors during manual editing increases greatly. If the task flow file that abstracts the job logical relationship in a program version can be processed and parsed into text recognizable by the TWS tool in an automated manner, the large amount of manual task-relationship editing work in the TWS tool is expected to be reduced, efficiency improved, and the risk of manual operation errors lowered.
Please refer to fig. 1. The embodiment of the specification provides a method for generating a task scheduling file. In the embodiment of the present specification, the execution body of the task scheduling file generation method may be an electronic device having a logical operation function, and the electronic device may be a server. The server may be an electronic device having a certain arithmetic processing capability, and may have a network communication unit, a processor, a memory, and the like. Of course, the server is not limited to an electronic device as a physical entity and may also be software running in an electronic device. The server may also be a distributed server, that is, a system with multiple processors, memories, network communication modules, and the like operating in coordination. Alternatively, the server may be a server cluster formed by several servers. The method may include the following steps.
S110: and acquiring a task flow file.
In some embodiments, the task flow file may be a pre-written file, and the task flow file may include information about task names of the plurality of tasks, an execution order of the plurality of tasks, and the like.
In some embodiments, as shown in FIG. 2, the task flow file may be a file edited via VISIO.
In some embodiments, the user may upload a task flow file in the server. The server can receive the task flow files uploaded by the user. For example, the server may provide an interactive interface to a user, where the user may upload task flow files. The server can receive the task flow files uploaded by the user. Or, the user can also upload the task flow file in the client. The client can receive the task flow files uploaded by the user and send the task flow files to the server. The server may receive the task flow file. For example, the client may provide an interactive interface to a user, where the user may upload task flow files. The client can receive the task flow files uploaded by the user and send the task flow files to the server. The client may be, for example, a smart phone, a tablet computer, a notebook computer, a desktop computer, and the like. The client may be capable of communicating with the server, for example, via a wired network and/or a wireless network.
S120: and reading the execution sequence of each task and the task name of each task from the task flow file.
In some embodiments, the server may read the execution order of each task and the task name of each task from the task flow file. Specifically, the server may read the elements of the VISIO-edited task flow file using a Java program, and then process the read result to obtain the execution order of each task and the task name of each task.
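For illustration only, the following Java sketch shows one way the elements of a VISIO (.vsdx) task flow file might be read with the standard JDK zip and XML APIs; the page path "visio/pages/page1.xml", the element names (Shape, Text, Connect), and the file name "taskflow.vsdx" are assumptions about the Visio OOXML layout and are not the exact implementation of this specification.
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Minimal sketch: pull shape texts (candidate task names) and connector
// endpoints (candidate execution-order edges) out of a .vsdx task flow file.
public class TaskFlowReader {
    public static void main(String[] args) throws Exception {
        try (ZipFile vsdx = new ZipFile("taskflow.vsdx")) {          // hypothetical file name
            ZipEntry page = vsdx.getEntry("visio/pages/page1.xml");  // assumed page location
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(vsdx.getInputStream(page));

            // Shape ID -> text drawn inside the shape (the task name in the diagram)
            Map<String, String> taskNames = new LinkedHashMap<>();
            NodeList shapes = doc.getElementsByTagName("Shape");
            for (int i = 0; i < shapes.getLength(); i++) {
                Element shape = (Element) shapes.item(i);
                NodeList texts = shape.getElementsByTagName("Text");
                if (texts.getLength() > 0) {
                    taskNames.put(shape.getAttribute("ID"),
                                  texts.item(0).getTextContent().trim());
                }
            }

            // Connect elements attach a connector shape to the shapes it joins;
            // pairing them up yields the execution-order edges between tasks.
            NodeList connects = doc.getElementsByTagName("Connect");
            for (int i = 0; i < connects.getLength(); i++) {
                Element c = (Element) connects.item(i);
                System.out.println("connector " + c.getAttribute("FromSheet")
                        + " attaches to shape " + c.getAttribute("ToSheet"));
            }
            System.out.println("tasks: " + taskNames.values());
        }
    }
}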
S130: determining a scheduling strategy corresponding to each task according to the task name of each task; the scheduling policy characterizes the execution mode of the task.
In some embodiments, the scheduling policy characterizes the manner in which the task is executed, for example, how the task is scheduled and in what way it is run.
In some embodiments, the task name comprises a scheduling identifier; the scheduling identifier is used for identifying a scheduling policy; correspondingly, the scheduling policy corresponding to the task is determined according to the scheduling identifier. Specifically, for each task in the task flow file, the task name may be composed of a task identifier and a scheduling identifier. The task identifier identifies the task, and the scheduling identifier may be used to identify the scheduling policy. For example, for a task with the task name "AB100000", the first four characters "AB10" may be the task identifier and the last four characters "0000" may be the scheduling identifier. The task identifiers of different tasks are different, while the scheduling identifiers of different tasks may be the same or different. For example, the task named "AB100000" and the task named "AB200000" are different tasks, but their scheduling identifiers are the same, which may indicate that the two tasks have the same scheduling policy; the task named "AB100000" and the task named "AB20XX00" are different tasks and have different scheduling policies. Of course, the above only gives the composition of the task identifier and the scheduling identifier in the task name by way of example; the task identifier and the scheduling identifier may also be other numbers, letters, symbols, or combinations of numbers, letters, and symbols, which is not limited in this specification.
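Purely as an illustration of the naming convention above, a hypothetical helper such as the following could separate the two parts of an eight-character task name; the fixed 4 + 4 split is an assumption taken from the "AB100000" example.
// Hypothetical helper: split an eight-character task name such as "AB100000"
// into a task identifier ("AB10") and a scheduling identifier ("0000").
class TaskNameParser {
    static String[] split(String taskName) {
        if (taskName == null || taskName.length() != 8) {
            throw new IllegalArgumentException("unexpected task name: " + taskName);
        }
        return new String[] { taskName.substring(0, 4), taskName.substring(4) };
    }
}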
In some embodiments, the scheduling policy may include: executing the task directly and executing the next task after the task has been executed; or splitting the task into a plurality of subtasks and executing the next task after the plurality of subtasks have been executed. For example, for a task whose scheduling identifier is "0000", the corresponding scheduling policy may be to execute the task directly and execute the next task after the task has been executed; for scheduling identifiers other than "0000", such as "00**", "XX00", or "XX**", the corresponding scheduling policy may be to split the task into a plurality of subtasks and execute the next task after the plurality of subtasks have been executed. Of course, the correspondence between the scheduling identifier and the scheduling policy is not limited to the above examples; other modifications are possible for those skilled in the art in light of the technical spirit of the present application, and all of them are covered by the scope of the present application as long as the functions and effects achieved are the same as or similar to those of the present application.
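A minimal sketch of how the scheduling identifiers mentioned above might be mapped to scheduling policies is given below; the enum names and the placeholder conventions ("X" standing for a partition digit, "*" for a parallel sequence digit) are illustrative assumptions rather than the exact rules of the embodiment.
// Illustrative mapping from scheduling identifier to scheduling policy.
enum SchedulingPolicy { DIRECT, SPLIT_BY_PARTITION, SPLIT_BY_PARALLELISM, SPLIT_BY_PARTITION_AND_PARALLELISM }

class PolicyResolver {
    static SchedulingPolicy resolve(String schedId) {
        boolean byPartition   = schedId.contains("X");   // e.g. "XX00", "XX**"
        boolean byParallelism = schedId.contains("*");   // e.g. "00**", "XX**"
        if (byPartition && byParallelism) return SchedulingPolicy.SPLIT_BY_PARTITION_AND_PARALLELISM;
        if (byPartition) return SchedulingPolicy.SPLIT_BY_PARTITION;
        if (byParallelism) return SchedulingPolicy.SPLIT_BY_PARALLELISM;
        return SchedulingPolicy.DIRECT;                  // e.g. "0000"
    }
}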
In some embodiments, a banking enterprise may typically include a head office and provincial branches nationwide, and the data of the head office and of each provincial branch are stored in different partitions of the database. Some tasks can be executed directly, and the next task can be executed after the execution is finished. For other tasks, however, the data are stored in different partitions of the database; to improve execution efficiency, such a task can be split into a plurality of subtasks, and the next task is executed after each subtask has been executed. To improve the resource utilization of the server, a task can also be split into a plurality of subtasks that are executed in parallel, and the next task is executed after the plurality of subtasks have been executed. For a more complex task or a task with a larger data volume, after the task is split into a plurality of subtasks, each subtask also needs to be split again, and the next task is executed after the re-split subtasks have all been executed.
In some embodiments, splitting the task into a plurality of subtasks and executing the next task after the plurality of subtasks have been executed may include: splitting the task into a preset number of subtasks and executing the next task after the plurality of subtasks have been executed, wherein the data of the plurality of subtasks are stored in different partitions of the database. Specifically, the database may include a plurality of different partitions, and the preset number may be the number of partitions in the database. Each partition in the database has a partition identifier, each partition identifier uniquely identifies one partition, and the partition identifier may be composed of two digits. For a task with the task name "AB20XX00", the server may determine, according to the scheduling identifier "XX00", that the scheduling policy corresponding to the task is to split the task into a preset number of subtasks and execute the next task after the plurality of subtasks have been executed, wherein the data of the plurality of subtasks are stored in different partitions of the database. For example, if the partition identifiers are 01-15, identifying 15 partitions, the task may be split into 15 subtasks, each subtask bound to a different partition identifier. Specifically, the partition identifier may be inserted into the task name to obtain the task name of each subtask; for example, the partition identifier may replace the "XX" in the scheduling identifier to obtain the subtask names "AB200100", "AB200200", ..., "AB201500". The subtask corresponding to each partition is then processed according to the partition identifier in its task name, and the next task is executed after the plurality of subtasks have been executed.
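The subtask-name generation described in this paragraph could look roughly like the following sketch, assuming two-digit partition identifiers and the "XX" placeholder convention from the example.
import java.util.ArrayList;
import java.util.List;

// Sketch: expand a partition-split task such as "AB20XX00" into one subtask per
// database partition by substituting the two-digit partition identifier for the
// "XX" placeholder, giving "AB200100", "AB200200", ..., "AB201500".
class PartitionSplitter {
    static List<String> splitByPartition(String taskName, int partitionCount) {
        List<String> subtasks = new ArrayList<>();
        for (int p = 1; p <= partitionCount; p++) {
            subtasks.add(taskName.replace("XX", String.format("%02d", p)));
        }
        return subtasks;
    }
}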
In some embodiments, splitting the task into a plurality of subtasks and executing the next task after the plurality of subtasks have been executed may include: presetting the parallelism of task execution, the parallelism representing the maximum number of tasks to be executed in parallel; splitting the task into a plurality of subtasks, the number of subtasks being equal to the parallelism; and executing the plurality of subtasks in parallel. Specifically, for a task with the task name "AB3000**", the server may determine, according to the scheduling identifier "00**", that the scheduling policy corresponding to the task is: preset the parallelism of task execution, the parallelism representing the maximum number of tasks to be executed in parallel; split the task into a plurality of subtasks, the number of subtasks being equal to the parallelism; and execute the plurality of subtasks in parallel. For example, if the preset parallelism of task execution is 8, the task may be split into 8 subtasks, and the subtask names may be set according to the parallelism; for example, the "**" in the scheduling identifier may be modified to obtain the subtask names "AB300001", "AB300002", ..., "AB300008". The plurality of subtasks are executed in parallel, and the next task is executed after the plurality of subtasks have been executed.
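Analogously, the parallel split described here might be sketched as follows, assuming the "**" placeholder convention suggested by the example.
import java.util.ArrayList;
import java.util.List;

// Sketch: expand a parallel task such as "AB3000**" into as many subtasks as the
// preset parallelism by substituting a two-digit sequence number for the "**"
// placeholder, giving "AB300001", "AB300002", ..., "AB300008" for parallelism 8.
class ParallelSplitter {
    static List<String> splitByParallelism(String taskName, int parallelism) {
        List<String> subtasks = new ArrayList<>();
        for (int n = 1; n <= parallelism; n++) {
            subtasks.add(taskName.replace("**", String.format("%02d", n)));
        }
        return subtasks;
    }
}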
In some embodiments, splitting the task into a plurality of subtasks and executing the next task after the plurality of subtasks have been executed may further include: splitting the task into a preset number of subtasks, wherein the data of the plurality of subtasks are stored in different partitions of a database; splitting each subtask again according to the preset parallelism of task execution, executing the re-split subtasks in parallel, and executing the next task after the re-split subtasks have been executed, wherein the number of tasks obtained after each subtask is split is equal to the parallelism. Specifically, for a task with the task name "AB40XX**", the server may determine, according to the scheduling identifier "XX**", that the scheduling policy corresponding to the task is: split the task into a preset number of subtasks, wherein the data of the plurality of subtasks are stored in different partitions of a database; split each subtask again according to the preset parallelism of task execution, execute the re-split subtasks in parallel, and execute the next task after the re-split subtasks have been executed, wherein the number of tasks obtained after each subtask is split is equal to the parallelism. For example, if the partition identifiers are 01-15, identifying 15 partitions, the task may be split into 15 subtasks, each subtask bound to a different partition identifier. Specifically, the partition identifier may be inserted into the task name to obtain the task name of each subtask; for example, the partition identifier may replace the "XX" in the scheduling identifier to obtain the subtask names "AB4001**", "AB4002**", ..., "AB4015**", and the subtask corresponding to each partition is processed according to the partition identifier in its task name. Further, for each subtask, if the preset parallelism of task execution is 8, the subtask may be further split into 8 tasks; for the subtask "AB4001**", the "**" in the scheduling identifier may be modified to obtain the re-split task names "AB400101", "AB400102", ..., "AB400108", and the other subtasks are split correspondingly. The re-split subtasks are executed in parallel, and the next task is executed after the plurality of re-split subtasks have been executed.
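Combining the two placeholders, the two-level split of this paragraph might be sketched as follows; again the placeholder conventions are assumptions taken from the examples.
import java.util.ArrayList;
import java.util.List;

// Sketch: two-level expansion for a task such as "AB40XX**": first one subtask per
// partition ("XX" -> 01..15), then each of those split again by the preset
// parallelism ("**" -> 01..08), giving "AB400101", ..., "AB401508".
class TwoLevelSplitter {
    static List<String> split(String taskName, int partitionCount, int parallelism) {
        List<String> result = new ArrayList<>();
        for (int p = 1; p <= partitionCount; p++) {
            String perPartition = taskName.replace("XX", String.format("%02d", p));
            for (int n = 1; n <= parallelism; n++) {
                result.add(perPartition.replace("**", String.format("%02d", n)));
            }
        }
        return result;
    }
}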
In a specific application scenario, the method provided by the embodiment of the present disclosure may be applied to a banking enterprise, which may include a head office and provincial branches nationwide. Take the task flow file shown in fig. 3 as an example. The server can read the execution order of the tasks and the scheduling policy corresponding to each task according to its task name. Specifically, when task processing starts, the task "AB100000" may be executed first; its scheduling identifier "0000" determines that the corresponding scheduling policy is to execute the task directly and execute the next task after the task has been executed. After the task "AB100000" is executed, the task "AB20XX00" is executed; its scheduling identifier "XX00" determines that the corresponding scheduling policy is to split the task into a preset number of subtasks and execute the next task after the plurality of subtasks have been executed. If the banking enterprise comprises 10 provincial branches, the database may include a partition corresponding to each branch, the partitions respectively storing the data of the branches; the task "AB20XX00" may then be split into 10 subtasks "AB200100", "AB200200", ..., "AB201000", the subtask corresponding to each partition is processed according to the partition identifier in its task name, and the next task is executed after the subtasks of all branches have been executed. After the task "AB20XX00" is executed, the tasks "AB300000" and "AB4000**" are executed at the same time. The scheduling identifier "0000" of the task "AB300000" determines that the corresponding scheduling policy is to execute the task directly and execute the next task after the task has been executed. The scheduling identifier "00**" of the task "AB4000**" determines that the corresponding scheduling policy is to split the task into a plurality of subtasks, execute the plurality of subtasks in parallel, and execute the next task after the plurality of subtasks have been executed; if the preset parallelism of task execution is 8, the task is split into 8 subtasks "AB400001", "AB400002", ..., "AB400008". Task processing is finished after the tasks "AB300000" and "AB4000**" have been executed.
S140: generating a task scheduling file based on a scheduling strategy corresponding to each task and an execution sequence of each task so as to schedule each task according to the task scheduling file; and the task scheduling file comprises a task scheduling logic relation.
In some embodiments, the task scheduling logical relationship characterizes how the tasks are scheduled, for example, in what order the tasks are executed and how the tasks are distributed. After the scheduling policy corresponding to each task is determined, a task scheduling file including the task scheduling logical relationship may be generated in combination with the execution order of the tasks. The task scheduling file may be a file recognizable by the TWS tool; each field, and each attribute of each field, in the task scheduling file represents a task scheduling logical relationship, and the TWS tool may identify the task scheduling logical relationship represented by each field and each attribute of each field in the task scheduling file and schedule each task according to the task scheduling logical relationship in the task scheduling file.
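For illustration only, the sketch below writes a plain-text scheduling file in which each task lists its predecessors; the FOLLOWS-style line format is a simplified stand-in loosely modelled on job-stream definitions and is not claimed to be the exact syntax the TWS tool accepts.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative emitter: one line per task, naming the tasks it must follow, so a
// downstream scheduler can reconstruct the task scheduling logical relationship.
class ScheduleFileWriter {
    static void write(Map<String, List<String>> predecessors, Path out) throws IOException {
        StringBuilder sb = new StringBuilder();
        predecessors.forEach((task, deps) -> {
            sb.append(task);
            if (!deps.isEmpty()) {
                sb.append(" FOLLOWS ").append(String.join(", ", deps));
            }
            sb.append(System.lineSeparator());
        });
        Files.writeString(out, sb.toString());
    }

    public static void main(String[] args) throws IOException {
        Map<String, List<String>> deps = new LinkedHashMap<>();
        deps.put("AB100000", List.of());
        deps.put("AB200100", List.of("AB100000"));
        deps.put("AB300000", List.of("AB200100"));
        write(deps, Path.of("taskschedule.txt"));    // hypothetical output file name
    }
}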
The method for generating the task scheduling file provided by the embodiment of the specification can acquire a task flow file; read the execution order of each task and the task name of each task from the task flow file; determine a scheduling strategy corresponding to each task according to the task name of each task, the scheduling strategy characterizing the execution mode of the task; and generate a task scheduling file based on the scheduling strategy corresponding to each task and the execution order of the tasks so as to schedule each task according to the task scheduling file, the task scheduling file including a task scheduling logical relationship. The method provided by the embodiment of the specification processes and parses the task flow file, which abstracts the job logical relationship in a program version, into text that the TWS tool can identify in an automated manner, thereby improving the efficiency of task scheduling processing and reducing the risk of manual operation errors.
Fig. 4 is a functional structure diagram of an electronic device according to an embodiment of the present disclosure, where the electronic device may include a memory and a processor.
In some embodiments, the memory may be used to store the computer programs and/or modules, and the processor may implement the various functions of task scheduling file generation by running or executing the computer programs and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly comprise a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the user terminal. In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
The processor may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor may execute the computer instructions to perform the steps of: acquiring a task flow file; reading the execution sequence of each task and the task name of each task from the task flow file; determining a scheduling strategy corresponding to each task according to the task name of each task; the scheduling strategy represents an execution mode of a task; generating a task scheduling file based on a scheduling strategy corresponding to each task and an execution sequence of each task so as to schedule each task according to the task scheduling file; and the task scheduling file comprises a task scheduling logic relation.
In the embodiments of the present description, the functions and effects specifically realized by the electronic device may be explained in comparison with other embodiments, and are not described herein again.
Fig. 5 is a functional structure diagram of a task scheduling file generating device according to an embodiment of the present disclosure, where the device may specifically include the following structural modules.
An obtaining module 510, configured to obtain a task flow file;
a reading module 520, configured to read an execution sequence of each task and a task name of each task from the task flow file;
a determining module 530, configured to determine, according to the task name of each task, a scheduling policy corresponding to each task; the scheduling strategy represents an execution mode of a task;
a generating module 540, configured to generate a task scheduling file based on a scheduling policy corresponding to each task and an execution sequence of each task, so as to schedule each task according to the task scheduling file; and the task scheduling file comprises a task scheduling logic relation.
The embodiment of the present specification further provides a computer-readable storage medium of a task scheduling file generation method, where the computer-readable storage medium stores computer program instructions, and when the computer program instructions are executed, the computer program instructions implement: acquiring a task flow file; reading the execution sequence of each task and the task name of each task from the task flow file; determining a scheduling strategy corresponding to each task according to the task name of each task; the scheduling strategy represents an execution mode of a task; generating a task scheduling file based on a scheduling strategy corresponding to each task and an execution sequence of each task so as to schedule each task according to the task scheduling file; and the task scheduling file comprises a task scheduling logic relation.
In the embodiments of the present specification, the storage medium includes, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Cache (Cache), a Hard Disk Drive (HDD), or a Memory Card (Memory Card). The memory may be used for storing the computer programs and/or modules, and the memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the user terminal, and the like. In addition, the memory may include high speed random access memory, and may also include non-volatile memory. In the embodiments of the present description, the functions and effects specifically realized by the program instructions stored in the computer-readable storage medium may be explained in contrast to other embodiments, and are not described herein again.
It should be noted that, in the present specification, each embodiment is described in a progressive manner, and the same or similar parts in each embodiment may be referred to each other, and each embodiment focuses on differences from other embodiments. In particular, as for the apparatus embodiment and the apparatus embodiment, since they are substantially similar to the method embodiment, the description is relatively simple, and reference may be made to some descriptions of the method embodiment for relevant points.
After reading this specification, persons skilled in the art will appreciate that any combination of some or all of the embodiments set forth herein, without inventive faculty, is within the scope of the disclosure and protection of this specification.
In the 1990s, improvements to a technology could be clearly distinguished as improvements in hardware (for example, improvements in circuit structures such as diodes, transistors, and switches) or improvements in software (improvements in a method flow). However, as technology develops, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be implemented with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, and the original code to be compiled must also be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used at present. It will also be apparent to those skilled in the art that a hardware circuit implementing the logical method flow can be readily obtained merely by slightly logically programming the method flow into an integrated circuit using the above hardware description languages.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
From the above description of the embodiments, it is clear to those skilled in the art that the present specification can be implemented by software plus a necessary general hardware platform. Based on such understanding, the technical solutions of the present specification may be essentially or partially implemented in the form of software products, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and include instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments of the present specification.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The description is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
While the specification has been described with examples, those skilled in the art will appreciate that there are numerous variations and permutations of the specification that do not depart from the spirit of the specification, and it is intended that the appended claims include such variations and modifications that do not depart from the spirit of the specification.

Claims (11)

1. A task scheduling file generation method is characterized by comprising the following steps:
acquiring a task flow file;
reading the execution sequence of each task and the task name of each task from the task flow file;
determining a scheduling strategy corresponding to each task according to the task name of each task; the scheduling strategy represents an execution mode of a task;
generating a task scheduling file based on a scheduling strategy corresponding to each task and an execution sequence of each task so as to schedule each task according to the task scheduling file; and the task scheduling file comprises a task scheduling logic relation.
2. The method of claim 1, wherein the task flow file is a file edited via VISIO.
3. The method of claim 1, wherein the task name comprises a scheduling identifier; the scheduling identifier is used for identifying a scheduling strategy; correspondingly, the scheduling strategy corresponding to the task is determined according to the scheduling identifier.
4. The method of claim 1, wherein the scheduling policy comprises at least one of:
after the starting task is executed, executing the next task;
and splitting the task into a plurality of subtasks, and executing the next task after the plurality of subtasks are executed.
5. The method of claim 4, wherein the splitting the task into a plurality of sub-tasks, and wherein executing the next task after the plurality of sub-tasks are completed comprises:
splitting a task into a plurality of subtasks with a preset number, and executing a next task after the plurality of subtasks are executed; wherein the data of the plurality of subtasks is stored in different partitions in the database.
6. The method of claim 4, wherein the splitting the task into a plurality of sub-tasks, and wherein executing the next task after the plurality of sub-tasks are completed comprises:
presetting the parallelism of task execution; the parallelism represents the maximum number of tasks to be executed in parallel;
splitting a task into a plurality of subtasks; the number of the subtasks is equal to the parallelism;
and executing the plurality of subtasks in parallel, and executing the next task after the plurality of subtasks are executed.
7. The method of claim 4, wherein the splitting the task into a plurality of sub-tasks, and wherein executing the next task after the plurality of sub-tasks are completed comprises:
splitting a task into a plurality of subtasks with a preset number; wherein the data of the plurality of subtasks is stored in different partitions in a database;
splitting each subtask according to the preset parallelism of task execution, executing each split subtask in parallel, and executing the next task after the split subtasks are executed; and the number of tasks obtained after each subtask is split is equal to the parallelism.
8. The method of claim 1, wherein the generated task scheduling file is a TWS tool recognizable file, so that the TWS tool schedules the tasks according to the task scheduling logic relationship in the task scheduling file.
9. A task schedule file generation apparatus, the apparatus comprising:
the acquisition module is used for acquiring a task flow file;
the reading module is used for reading the execution sequence of each task and the task name of each task from the task flow file;
the determining module is used for determining a scheduling strategy corresponding to each task according to the task name of each task; the scheduling strategy represents an execution mode of a task;
the generating module is used for generating a task scheduling file based on a scheduling strategy corresponding to each task and an execution sequence of each task so as to schedule each task according to the task scheduling file; and the task scheduling file comprises a task scheduling logic relation.
10. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement: acquiring a task flow file; reading the execution sequence of each task and the task name of each task from the task flow file; determining a scheduling strategy corresponding to each task according to the task name of each task; the scheduling strategy represents an execution mode of a task; generating a task scheduling file based on a scheduling strategy corresponding to each task and an execution sequence of each task so as to schedule each task according to the task scheduling file; and the task scheduling file comprises a task scheduling logic relation.
11. A computer readable storage medium having computer instructions stored thereon that when executed perform: acquiring a task flow file; reading the execution sequence of each task and the task name of each task from the task flow file; determining a scheduling strategy corresponding to each task according to the task name of each task; the scheduling strategy represents an execution mode of a task; generating a task scheduling file based on a scheduling strategy corresponding to each task and an execution sequence of each task so as to schedule each task according to the task scheduling file; and the task scheduling file comprises a task scheduling logic relation.
CN202010902677.XA 2020-09-01 2020-09-01 Task scheduling file generation method, device and storage medium Active CN112035230B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010902677.XA CN112035230B (en) 2020-09-01 2020-09-01 Task scheduling file generation method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010902677.XA CN112035230B (en) 2020-09-01 2020-09-01 Task scheduling file generation method, device and storage medium

Publications (2)

Publication Number Publication Date
CN112035230A true CN112035230A (en) 2020-12-04
CN112035230B CN112035230B (en) 2023-08-18

Family

ID=73586650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010902677.XA Active CN112035230B (en) 2020-09-01 2020-09-01 Task scheduling file generation method, device and storage medium

Country Status (1)

Country Link
CN (1) CN112035230B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326117A (en) * 2021-07-15 2021-08-31 中国电子科技集团公司第十五研究所 Task scheduling method, device and equipment
CN113626173A (en) * 2021-08-31 2021-11-09 阿里巴巴(中国)有限公司 Scheduling method, device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101038559A (en) * 2006-09-11 2007-09-19 中国工商银行股份有限公司 Batch task scheduling engine and dispatching method
CN103838625A (en) * 2014-02-27 2014-06-04 中国工商银行股份有限公司 Data interaction method and system
CN104933618A (en) * 2015-06-03 2015-09-23 中国银行股份有限公司 Monitoring method and apparatus for batch work operation data of core banking system
CN106779582A (en) * 2016-11-24 2017-05-31 中国银行股份有限公司 A kind of TWS flows collocation method and device
CN110134598A (en) * 2019-05-05 2019-08-16 中国银行股份有限公司 A kind of batch processing method, apparatus and system
WO2020140683A1 (en) * 2019-01-04 2020-07-09 深圳壹账通智能科技有限公司 Task scheduling method and apparatus, computer device, and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101038559A (en) * 2006-09-11 2007-09-19 中国工商银行股份有限公司 Batch task scheduling engine and dispatching method
CN103838625A (en) * 2014-02-27 2014-06-04 中国工商银行股份有限公司 Data interaction method and system
CN104933618A (en) * 2015-06-03 2015-09-23 中国银行股份有限公司 Monitoring method and apparatus for batch work operation data of core banking system
CN106779582A (en) * 2016-11-24 2017-05-31 中国银行股份有限公司 A kind of TWS flows collocation method and device
WO2020140683A1 (en) * 2019-01-04 2020-07-09 深圳壹账通智能科技有限公司 Task scheduling method and apparatus, computer device, and storage medium
CN110134598A (en) * 2019-05-05 2019-08-16 中国银行股份有限公司 A kind of batch processing method, apparatus and system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326117A (en) * 2021-07-15 2021-08-31 中国电子科技集团公司第十五研究所 Task scheduling method, device and equipment
CN113626173A (en) * 2021-08-31 2021-11-09 阿里巴巴(中国)有限公司 Scheduling method, device and storage medium
CN113626173B (en) * 2021-08-31 2023-12-12 阿里巴巴(中国)有限公司 Scheduling method, scheduling device and storage medium

Also Published As

Publication number Publication date
CN112035230B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN110806923A (en) Parallel processing method and device for block chain tasks, electronic equipment and medium
US9250960B2 (en) Planning execution of tasks with dependency resolution
CN102810057A (en) Log recording method
CN111400011B (en) Real-time task scheduling method, system, equipment and readable storage medium
CN112035230B (en) Task scheduling file generation method, device and storage medium
CN111492344B (en) System and method for monitoring execution of Structured Query Language (SQL) queries
US8407713B2 (en) Infrastructure of data summarization including light programs and helper steps
CN114722119A (en) Data synchronization method and system
US20080320291A1 (en) Concurrent exception handling
CN107798111B (en) Method for exporting data in large batch in distributed environment
CN114691658A (en) Data backtracking method and device, electronic equipment and storage medium
US8612597B2 (en) Computing scheduling using resource lend and borrow
US8146085B2 (en) Concurrent exception handling using an aggregated exception structure
US20080082982A1 (en) Method, system and computer program for translating resource relationship requirements for jobs into queries on a relational database
US6944618B2 (en) Method, computer program product, and system for unloading a hierarchical database utilizing segment specific selection criteria
CN110908644A (en) Configuration method and device of state node, computer equipment and storage medium
EP2447830A1 (en) System and method for decoupling business logic and user interface via a generic object access layer
CN110989999A (en) Code generation method and device, electronic equipment and medium
CN109857380B (en) Workflow file compiling method and device
CN111309821B (en) Task scheduling method and device based on graph database and electronic equipment
CN113971074A (en) Transaction processing method and device, electronic equipment and computer readable storage medium
CN110347471B (en) Hierarchical display component system, display component calling method and device
CN113553098A (en) Method and device for submitting Flink SQL (structured query language) operation and computer equipment
CN111881025A (en) Automatic test task scheduling method, device and system
US20180046966A1 (en) System and method for analyzing and prioritizing issues for automation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant