CN113326117A - Task scheduling method, device and equipment - Google Patents

Task scheduling method, device and equipment Download PDF

Info

Publication number
CN113326117A
Authority
CN
China
Prior art keywords
flow
processed
task
description
description file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110798469.4A
Other languages
Chinese (zh)
Other versions
CN113326117B (en)
Inventor
李常宝
曹禹
刘忠麟
付凯
武晓卓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 15 Research Institute
Original Assignee
CETC 15 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 15 Research Institute filed Critical CETC 15 Research Institute
Priority to CN202110798469.4A priority Critical patent/CN113326117B/en
Publication of CN113326117A publication Critical patent/CN113326117A/en
Application granted granted Critical
Publication of CN113326117B publication Critical patent/CN113326117B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating

Abstract

The embodiment of the specification discloses a task scheduling method, a task scheduling device and task scheduling equipment. The method comprises the following steps: acquiring a flow to be processed; describing the flow to be processed according to a preset description template to obtain a flow description file, wherein the flow description file is a file for describing the flow to be processed and tasks included in the flow to be processed; and writing the flow description file into a task queue, and executing the flow to be processed. By adopting the task scheduling method provided by the embodiment of the specification, more data analysis task types and platforms can be supported, frequent landing of intermediate data to disk during big data analysis can be avoided, and the execution efficiency of the flow is improved.

Description

Task scheduling method, device and equipment
Technical Field
The present specification relates to the technical field of big data analysis, and in particular, to a task scheduling method, apparatus, and device.
Background
With the rapid development of Internet technology, big data analysis and processing platforms have a high threshold of use, models once built are difficult to turn into applications, and big data analysis and processing is inconvenient; interactive modeling analysis platforms were therefore introduced. An interactive modeling analysis platform is a platform-level solution proposed to solve the problem of big data analysis and processing. However, as big data analysis and processing requirements grow increasingly complex, simple flows run by manually executing independent tasks or by a timed execution tool alone can often no longer satisfy these requirements; the current mainstream open-source big data task execution engines cannot achieve comprehensive compatibility across different processing platforms; in addition, most task execution engines currently submit each task in a flow to the task computing engine independently, which leads to multiple task submissions and frequent landing of intermediate data to disk while the flow executes, greatly increasing the completion time of the flow.
Therefore, a new method is needed that can improve the platform's big data analysis and processing capability and achieve comprehensive compatibility for big data analysis and processing tasks.
Disclosure of Invention
The embodiment of the specification provides a task scheduling method, a task scheduling device and task scheduling equipment, which are used for solving the following technical problems: as big data analysis and processing requirements grow increasingly complex, simple flows run by manually executing independent tasks or by a timed execution tool alone can often no longer satisfy them; the current mainstream open-source big data task execution engines cannot achieve comprehensive compatibility with today's increasingly complex big data analysis and processing tasks; in addition, most task execution engines currently submit each task in a flow to the task computing engine independently, which leads to multiple task submissions and frequent landing of intermediate data to disk while the flow executes, greatly increasing the completion time of the flow.
In order to solve the above technical problem, the embodiments of the present specification are implemented as follows:
the task scheduling method provided by the embodiment of the specification comprises the following steps:
acquiring a flow to be processed;
describing the flow to be processed according to a preset description template to obtain a flow description file, wherein the flow description file is a file for describing the flow to be processed and tasks included in the flow to be processed;
and writing the flow description file into a task queue, and executing the flow to be processed.
Further, the describing the to-be-processed flow according to a preset description template to obtain a flow description file specifically includes:
and describing the process description information and the process section description information corresponding to the to-be-processed process and the task description information corresponding to the to-be-processed process according to the preset description template to obtain a process description file corresponding to the to-be-processed process, wherein the to-be-processed process is composed of at least one process section.
Further, the process description information and the process segment description information corresponding to the to-be-processed process include: the name of the flow and/or flow segment to be processed, the unique identifier of the flow segment corresponding to the flow to be processed, tasks included in the flow to be processed, the number of tasks included in the flow to be processed, the input and output of the flow and/or flow segment to be processed, the execution environment of the flow and/or flow segment to be processed, the priority of the flow and/or flow segment to be processed, and the dependency relationship between the flow and/or flow segment to be processed and other flows and/or flow segments;
the task description information corresponding to the flow to be processed comprises the name of the task included in the flow to be processed, the unique identification of the task, the input and output of the task, the execution environment of the task and the dependency relationship between the task and other tasks.
Further, the writing the process description file into a task queue and executing the to-be-processed process specifically includes:
judging whether the task included in the flow to be processed is a timing task, if the task included in the flow to be processed is the timing task, writing the flow description file and the timing type of the task included in the flow to be processed into the task queue, and executing the flow to be processed;
and if the task included in the flow to be processed is a non-timing task, directly writing the flow description file into the task queue, and executing the flow to be processed.
Further, the writing the process description file into a task queue and executing the to-be-processed process specifically includes:
reading the process description file, and writing the process description file into a task queue;
adopting DAG to analyze the flow description file in the task queue to generate flow DAG information;
performing flow packaging based on the flow DAG information to obtain a big data analysis flow;
and submitting the big data analysis flow to a corresponding execution platform, and executing the flow to be processed.
Further, the process packaging is performed based on the process DAG information to obtain a big data analysis process, and the process specifically includes:
and based on the flow DAG information, performing flow sub-packaging according to the execution environment of the task included in the task description information in the flow description file to obtain a big data analysis flow.
Further, the submitting the big data analysis process to a corresponding execution platform, and executing the to-be-processed process specifically includes:
and submitting the big data analysis flow to a corresponding execution platform based on the execution environment of the task included in the task description information included in the big data analysis flow, and executing the flow to be processed.
An embodiment of this specification provides a task scheduling apparatus, including:
the acquisition module acquires a flow to be processed;
the editing module is used for describing the flow to be processed according to a preset description template to obtain a flow description file, wherein the flow description file is used for describing the flow to be processed and tasks included in the flow to be processed;
and the execution module writes the process description file into a task queue and executes the to-be-processed process.
An embodiment of the present specification further provides an electronic device, including:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring a flow to be processed;
describing the flow to be processed according to a preset description template to obtain a flow description file, wherein the flow description file is a file for describing the flow to be processed and tasks included in the flow to be processed;
and writing the flow description file into a task queue, and executing the flow to be processed.
The embodiments of the specification adopt at least one technical scheme capable of achieving the following beneficial effects: more data analysis task types and platforms can be supported, frequent landing of intermediate data to disk during big data analysis can be avoided, and the execution efficiency of the flow is improved.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments described in the present specification; for those skilled in the art, other drawings can be obtained from these drawings without any creative effort.
Fig. 1 is a schematic diagram of a task scheduling method provided in an embodiment of the present specification;
FIG. 2 is an architectural diagram of task scheduling provided by embodiments of the present description;
fig. 3 is a framework diagram of a task scheduling method provided by an embodiment of the present specification;
fig. 4 is a schematic diagram of a task scheduling apparatus according to an embodiment of the present disclosure.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any inventive step based on the embodiments of the present disclosure, shall fall within the scope of protection of the present application.
Big data refers to data sets that cannot be captured, managed, and processed with conventional software tools within an acceptable time frame; it is a massive, fast-growing, and diverse information asset that requires new processing modes to provide stronger decision-making power, insight discovery, and process optimization capability. At present, big data analysis and processing usually relies on manually executing independent tasks or on simple flows driven by a timed execution tool such as crontab, which cannot meet the requirements of complex flows. Big data task execution mainly relies on open-source task execution engines such as Luigi, Apache Airflow, Apache Oozie, and Azkaban; these engines schedule and execute tasks on top of a particular big data framework and cannot achieve comprehensive compatibility with the increasingly complex types of big data analysis and processing tasks seen today. Moreover, such task execution engines submit each task in a flow to the task computing engine independently, causing multiple task submissions and frequent landing of intermediate data to disk while the flow executes, which greatly increases the completion time of the flow. Based on this, the present specification provides a new task scheduling method.
Fig. 1 is a schematic diagram of a task scheduling method provided in an embodiment of the present specification, and as shown in fig. 1, the method includes:
step S101: and acquiring a flow to be processed.
In the embodiment of the present specification, the flow to be processed is a flow for performing big data analysis. A flow is the process of completing a complete business behavior through two or more business steps; in short, a flow is an ordered composition of flow nodes and their execution modes.
Step S103: and describing the flow to be processed according to a preset description template to obtain a flow description file, wherein the flow description file is a file for describing the flow to be processed and tasks included in the flow to be processed.
In an embodiment of this specification, the describing the to-be-processed flow according to a preset description template to obtain a flow description file specifically includes:
and describing the process description information and the process section description information corresponding to the to-be-processed process and the task description information corresponding to the to-be-processed process according to the preset description template to obtain a process description file corresponding to the to-be-processed process, wherein the to-be-processed process is composed of at least one process section.
In the embodiment of the present specification, the preset description template establishes a unified template for flows that depend on different computing platforms, describing the flows and the tasks they contain, and thereby achieving compatibility across different computing platforms.
In this embodiment of the present specification, a JSON file format may be used to describe the flow description information and flow segment description information corresponding to the flow to be processed, as well as the task description information corresponding to the flow to be processed, so as to obtain the flow description file corresponding to the flow to be processed; alternatively, an XML or YAML file format may be used for the description. The format of the flow description file does not constitute a specific limitation to the present application.
In the embodiments of the present specification, flows are of two types: experimental flows and service flows. An experimental flow is built on the interactive modeling analysis platform by dragging and combining, on the interface, the operators the platform provides; it is composed of general-purpose and domain-specific big data analysis operators arranged according to their dependency relationships. A service flow is derived from an experimental flow that has been built and has passed testing; after a series of environment and parameter configurations, release, and deployment, it provides an independent data service.
In the embodiment of the present specification, tasks include timing tasks and non-timing tasks: a timing task is a big data analysis flow that needs to be executed on a schedule, and a non-timing task is a big data analysis flow that does not need to be executed on a schedule.
In the embodiments of the present specification, a flow is composed of several flow segments, and each flow segment includes several tasks. In a specific embodiment, the tasks in a flow that depend on the same platform may be grouped into one flow segment; flow segments may also be customized by the user or determined according to business requirements. By using flow segments, all tasks of a segment targeting the same platform can be submitted as a whole, avoiding single-task submissions as far as possible; this prevents frequent landing of intermediate data to disk and improves the execution efficiency of the flow, as sketched below.
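To make the flow-segment idea concrete, here is a minimal sketch that groups consecutive tasks declaring the same execution platform into one segment. The task fields and the consecutive-same-platform grouping rule are illustrative assumptions, not the exact partitioning logic defined by this application.

```python
from itertools import groupby

def group_tasks_into_segments(tasks):
    """Group consecutive tasks that declare the same platform into one segment.

    `tasks` is assumed to be a topologically ordered list of dicts with a
    "platform" key; this is a simplified stand-in for the flow-segment rule.
    """
    segments = []
    for platform, group in groupby(tasks, key=lambda t: t["platform"]):
        segments.append({"platform": platform, "tasks": list(group)})
    return segments

# Three tasks: the first two depend on Spark, the third on Flink,
# so the flow is cut into a Spark segment and a Flink segment.
tasks = [
    {"id": "t1", "platform": "Spark"},
    {"id": "t2", "platform": "Spark"},
    {"id": "t3", "platform": "Flink"},
]
print(group_tasks_into_segments(tasks))
```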
In an embodiment of this specification, the flow description information and the flow segment description information corresponding to the flow to be processed include: the name of the flow and/or flow segment to be processed, the unique identifier of the flow segment corresponding to the flow to be processed, tasks included in the flow to be processed, the number of tasks included in the flow to be processed, the input and output of the flow and/or flow segment to be processed, the execution environment of the flow and/or flow segment to be processed, the priority of the flow and/or flow segment to be processed, and the dependency relationship between the flow and/or flow segment to be processed and other flows and/or flow segments;
the task description information corresponding to the flow to be processed comprises the name of the task included in the flow to be processed, the unique identification of the task, the input and output of the task, the execution environment of the task and the dependency relationship between the task and other tasks.
It should be particularly noted that, in the embodiment of the present specification, the unique identifier of a flow segment is the id of the flow segment, and the unique identifier of a task is the id of the task; the execution environment of the flow and/or flow segments to be processed includes the computing platform required by the flow and the total amount of computing resources required; the execution environment of a task is the environment information the task's execution depends on, such as dependency packages and resource quantities; the input and output of the flow and/or flow segment to be processed are its input and output parameters and data, and the input and output of a task are the task's input and output parameters and data. In practical application, the execution platform of a task can be determined from the execution environment of the flow and/or flow segment and/or the execution environment of the task recorded in the flow description file, so that tasks can be executed on different platforms.
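As an illustration only, a JSON flow description file might look like the following. Every field name here is a hypothetical stand-in for the information enumerated above (names, unique identifiers, inputs and outputs, execution environments, priorities, and dependencies), not the actual preset description template.

```python
import json

# Hypothetical flow description; field names are illustrative, not normative.
flow_description = {
    "flow_name": "user_behavior_analysis",
    "priority": 5,
    "segments": [
        {
            "segment_id": "seg-001",
            "execution_environment": {"platform": "Spark", "cores": 8, "memory_gb": 32},
            "inputs": ["hdfs:///data/raw/events"],
            "outputs": ["hdfs:///data/clean/sessions"],
            "task_count": 2,
            "depends_on_segments": [],
            "tasks": [
                {
                    "task_id": "t-001",
                    "name": "clean_events",
                    "inputs": ["hdfs:///data/raw/events"],
                    "outputs": ["hdfs:///data/tmp/clean"],
                    "execution_environment": {"platform": "Spark"},
                    "depends_on": [],
                },
                {
                    "task_id": "t-002",
                    "name": "aggregate_sessions",
                    "inputs": ["hdfs:///data/tmp/clean"],
                    "outputs": ["hdfs:///data/clean/sessions"],
                    "execution_environment": {"platform": "Spark"},
                    "depends_on": ["t-001"],
                },
            ],
        }
    ],
}

print(json.dumps(flow_description, indent=2, ensure_ascii=False))
```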
Step S105: and writing the flow description file into a task queue, and executing the flow to be processed.
In an embodiment of this specification, the writing the process description file into a task queue, and executing the to-be-processed process specifically includes:
judging whether the task included in the flow to be processed is a timing task, if the task included in the flow to be processed is the timing task, writing the flow description file and the timing type of the task included in the flow to be processed into the task queue, and executing the flow to be processed;
and if the task included in the flow to be processed is a non-timing task, directly writing the flow description file into the task queue, and executing the flow to be processed.
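A minimal sketch of the enqueue decision just described, assuming an in-process queue and a cron-like string as the "timing type"; both are illustrative simplifications rather than the actual queue implementation.

```python
from collections import deque

task_queue = deque()

def enqueue_flow(flow_description, timed=False, timing_spec=None):
    """Write a flow description into the task queue.

    For a timing task the timing type (here a cron-like string, purely
    illustrative) is stored alongside the description; a non-timing task's
    description is written directly.
    """
    entry = {"description": flow_description}
    if timed:
        entry["timing_spec"] = timing_spec
    task_queue.append(entry)

enqueue_flow({"flow_name": "daily_report"}, timed=True, timing_spec="0 2 * * *")
enqueue_flow({"flow_name": "ad_hoc_query"})
print(list(task_queue))
```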
In an embodiment of this specification, the writing the process description file into a task queue, and executing the to-be-processed process specifically includes:
reading the process description file, and writing the process description file into a task queue;
adopting DAG to analyze the flow description file in the task queue to generate flow DAG information;
performing flow packaging based on the flow DAG information to obtain a big data analysis flow;
and submitting the big data analysis flow to a corresponding execution platform, and executing the flow to be processed.
In this specification embodiment, the flow DAG information may represent dependencies between flows.
In an embodiment of this specification, the performing process encapsulation based on the process DAG information to obtain a big data analysis process specifically includes:
and based on the flow DAG information, performing flow sub-packaging according to the execution environment of the task included in the task description information in the flow description file to obtain a big data analysis flow.
In the embodiment of the present specification, the principle of DAG parsing is as follows: if, starting from any vertex of a directed graph, it is impossible to return to that vertex by traversing a sequence of directed edges, the graph is a directed acyclic graph (DAG). Specifically, the tasks form the vertices of such a graph: each vertex is a task, and each directed edge represents a dependency between tasks.
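This DAG principle can be illustrated with a small topological sort that also detects cycles; Kahn's algorithm below stands in for whatever parsing the flow engine actually performs, and the task format is assumed.

```python
from collections import defaultdict, deque

def topo_order(tasks):
    """Return an execution order for tasks, or raise if the graph has a cycle.

    `tasks` maps a task id to the list of task ids it depends on.
    """
    indegree = {t: 0 for t in tasks}
    dependents = defaultdict(list)
    for task, deps in tasks.items():
        for dep in deps:
            indegree[task] += 1
            dependents[dep].append(task)

    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)

    if len(order) != len(tasks):
        raise ValueError("dependency graph contains a cycle; not a valid DAG")
    return order

print(topo_order({"t1": [], "t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"]}))
```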
In an embodiment of this specification, the submitting the big data analysis flow to a corresponding execution platform, and executing the flow to be processed specifically includes:
and submitting the big data analysis flow to a corresponding execution platform based on the execution environment of the task included in the task description information included in the big data analysis flow, and executing the flow to be processed.
In the embodiment of the present specification, the execution platform may be Spark, Yarn, MapReduce, Flink, Kubernetes, or the like. The specific type of execution platform depends on the flow to be processed and does not constitute a specific limitation to the present application.
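A hedged sketch of choosing the execution platform from a task's execution environment: the platform names follow the list above, but the submitter functions are placeholders rather than real platform clients.

```python
def submit_to_spark(flow):
    # Placeholder: a real implementation would call the platform's own client.
    print("submitting to Spark:", flow["flow_name"])

def submit_to_flink(flow):
    print("submitting to Flink:", flow["flow_name"])

def submit_to_mapreduce(flow):
    print("submitting to MapReduce:", flow["flow_name"])

SUBMITTERS = {
    "Spark": submit_to_spark,
    "Flink": submit_to_flink,
    "MapReduce": submit_to_mapreduce,
}

def submit_flow(flow):
    """Pick the execution platform from the flow's execution environment."""
    platform = flow["execution_environment"]["platform"]
    if platform not in SUBMITTERS:
        raise ValueError(f"no submitter registered for platform {platform!r}")
    SUBMITTERS[platform](flow)

submit_flow({"flow_name": "demo", "execution_environment": {"platform": "Spark"}})
```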
In order to further explain the task scheduling method of the embodiments of the present disclosure, the following description is given with reference to an architecture diagram of task scheduling. Fig. 2 is an architecture diagram of task scheduling provided in an embodiment of the present disclosure. As shown in fig. 2, the task scheduling architecture provided in the embodiment of the present disclosure includes six modules: a flow access module, a flow queue module, a scheduler module, a flow engine module, a database module, and an execution state monitoring module. The six modules are described below.
The flow access module mainly receives and parses the flow description file from the interactive modeling analysis platform and stores it into a database, which may be a MySQL database.
The main function of the flow queue is to store the flows waiting to be executed. The flow queue may take the form of a thread pool, a message queue, a database table, or the like; its specific form does not constitute a limitation of the present application. The flow queue also accepts scheduling by the scheduler, which adjusts the positions of the flows waiting in the queue.
The scheduler is mainly responsible for scheduling newly accessed flows and the flows already in the flow queue. Specifically, when a new flow is accessed, the scheduler submits it to a proper position in the flow queue; when a flow reaches its execution time, the scheduler reads its flow description file from the database (MySQL) and submits it to the flow engine; and the scheduler performs operations such as reordering, deletion, and suspension on the flows waiting in the flow queue.
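The "proper position in the flow queue" can be read as priority-ordered insertion. The sketch below uses a heap keyed by (priority, arrival order), which is an assumed ordering rule, not necessarily the scheduler's actual one.

```python
import heapq
import itertools

class FlowQueue:
    """Priority-ordered flow queue; a lower priority number runs earlier."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # breaks ties by arrival order

    def push(self, flow, priority):
        heapq.heappush(self._heap, (priority, next(self._counter), flow))

    def pop(self):
        _, _, flow = heapq.heappop(self._heap)
        return flow

q = FlowQueue()
q.push({"flow_name": "nightly_batch"}, priority=5)
q.push({"flow_name": "urgent_fix"}, priority=1)
print(q.pop()["flow_name"])  # -> urgent_fix
```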
The flow engine is mainly used for assembling the flows to be executed and submitting them to the corresponding execution platform; it consists of two sub-modules, a DAG analysis module and an operator engine module. The DAG analysis module parses the flow description file stored in the MySQL database, the operator engine packages the flow according to the flow description file, and after packaging the whole flow is submitted to the execution platform specified in the task information. In the embodiments of the present disclosure, the execution platform may be compatible with various computing engines for executing the flow, for example, computing engine 1, computing engine 2, computing engine 3, and so on; in particular embodiments, the computing engines are Spark, Flink, and MapReduce. It should be noted that the numbering of computing engine 1, computing engine 2, and so on is only a schematic illustration and has no specific meaning.
Between the execution platform and the underlying computing resources there is also a middle layer, which specifically includes a container management engine for managing and scheduling the underlying computing resources, and a resource scheduling engine for managing and scheduling the servers and virtual machines among the underlying computing resources. In a specific embodiment, the container management engine may be Kubernetes or another type of container management engine, and the resource scheduling engine may be Yarn or another type of resource scheduling engine. The specific engine types of the container management engine and the resource scheduling engine are not limited in this application.
The database, which may be a MySQL database, is mainly responsible for storing and retrieving task information for the task execution engine. Specifically, it stores flow description information, which may include the name, type, execution environment, priority, start and end times of the flow, DAG (directed acyclic graph) information, the execution states of tasks and flows, and the like.
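One way to picture the flow table the database module maintains; the column names are guesses at the fields just listed, and SQLite stands in for MySQL so the snippet stays self-contained.

```python
import sqlite3

# SQLite stands in for the MySQL database described above; columns are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE flow_info (
    flow_id    TEXT PRIMARY KEY,
    name       TEXT,
    type       TEXT,      -- experimental / service
    exec_env   TEXT,      -- JSON: platform and total resources
    priority   INTEGER,
    start_time TEXT,
    end_time   TEXT,
    dag_info   TEXT,      -- JSON-serialized DAG of the flow
    exec_state TEXT       -- e.g. queued / running / finished / failed
)
""")
conn.execute(
    "INSERT INTO flow_info VALUES (?,?,?,?,?,?,?,?,?)",
    ("f-001", "user_behavior_analysis", "service",
     '{"platform": "Spark"}', 5, None, None, "{}", "queued"),
)
print(conn.execute("SELECT name, exec_state FROM flow_info").fetchall())
```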
The execution state monitoring module monitors the execution states of flows and tasks by reading, in real time, the execution information fed back by the execution platform.
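The monitoring module can be pictured as a polling loop that reads each running flow's status from the execution platform and writes it back; query_platform_status below is a placeholder for whatever feedback interface the platform exposes.

```python
import time

def query_platform_status(flow_id):
    """Placeholder for the execution platform's status feedback interface."""
    return "RUNNING"

def monitor(running_flows, update_state, poll_seconds=5, rounds=1):
    """Poll each running flow and persist its latest execution state."""
    for _ in range(rounds):
        for flow_id in running_flows:
            state = query_platform_status(flow_id)
            update_state(flow_id, state)  # e.g. UPDATE the flow table in the database
        time.sleep(poll_seconds)

monitor(["f-001"], update_state=lambda fid, s: print(fid, "->", s), poll_seconds=0)
```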
In the embodiments of the present disclosure, the execution subject may be a server, a virtual machine, or a container; the execution subject does not constitute a limitation of the present application.
For further understanding of the task scheduling method provided in the embodiment of the present specification, fig. 3 is a framework diagram of the task scheduling method provided in the embodiment of the present specification. As shown in fig. 3, the task scheduling method provided in the embodiment of the present specification includes flow access, flow scheduling, flow execution, and state monitoring. The task scheduling method is described in detail below.
Flow access: the flow description file is read and parsed, and the flow information and task information are stored into the corresponding base tables of the database (MySQL). In the embodiment of the present specification, parsing the flow description file means converting it into information that the flow access module can recognize; the specific parsing method does not constitute a limitation to the present application.
Flow scheduling: as mentioned above, tasks include timing tasks and non-timing tasks, so it is necessary to determine whether an accessed task is a timing task. If it is, the scheduler generates an execution plan for the timed task and writes the execution plan into the corresponding base table of the database (MySQL); the timed task is then scheduled by the scheduler and written into an appropriate position in the task queue. If the task is a non-timing task, it is scheduled by the scheduler directly and written into an appropriate position in the task queue.
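Generating an execution plan for a timed task might, in the simplest case, mean computing its next run times; the fixed-interval rule below is an assumption standing in for whatever timing types the scheduler actually supports.

```python
from datetime import datetime, timedelta

def build_execution_plan(start, interval_minutes, runs):
    """Compute the next `runs` execution times for a fixed-interval timed flow.

    A fixed interval is a simplification of the timing type; the resulting
    plan rows would be written to the database table mentioned above.
    """
    return [start + timedelta(minutes=interval_minutes * i) for i in range(runs)]

plan = build_execution_plan(datetime(2021, 7, 15, 2, 0), 1440, 3)  # daily, 3 runs
for t in plan:
    print(t.isoformat())
```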
Flow execution: when a flow in the task queue reaches its execution time, execution of the flow begins. Flow execution comprises four sub-steps, flow analysis, flow packaging, flow submission, and task execution, all of which are carried out in the flow engine and are described in detail below.
Flow analysis: the DAG (directed acyclic graph) analysis sub-module reads the flow description file stored in the database, generates flow DAG information, and passes the analysis result to the operator engine.
Flow packaging: the operator engine receives the DAG information passed from the DAG analysis sub-module and, based on that DAG information and preset configuration parameters, assembles it into a data analysis flow. The preset configuration parameters are the environment parameters under which the tasks are executed.
Flow submission: after flow packaging is finished, the whole flow is submitted to the platform corresponding to the flow, and the flow is executed. In one embodiment of the present specification, a flow whose dependent environment is Spark is submitted to the Yarn scheduling platform as a whole, and the Yarn scheduling platform schedules it onto the Spark computing engine for execution. The state of the corresponding flow in the database is updated at the same time.
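For a flow whose dependent environment is Spark, whole-flow submission through Yarn could resemble the following; the packaged artifact path and application name are hypothetical, while the spark-submit options used (--master, --deploy-mode, --name) are standard ones.

```python
import subprocess

def submit_spark_flow(packaged_flow_jar, flow_name):
    """Submit an entire packaged flow to YARN in one call (illustrative)."""
    cmd = [
        "spark-submit",
        "--master", "yarn",
        "--deploy-mode", "cluster",
        "--name", flow_name,
        packaged_flow_jar,  # hypothetical artifact produced by flow packaging
    ]
    return subprocess.run(cmd, check=True)

# Example invocation (requires a Spark client and a packaged flow artifact):
# submit_spark_flow("/opt/flows/user_behavior_analysis.jar", "user_behavior_analysis")
```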
Task execution: after flow submission, the whole flow has been handed over to the required computing platform, and the subtasks that make up the flow are executed one by one according to the preset parameters. In the embodiment of the present disclosure, the preset parameters include the dependency relationships of the tasks (i.e., the parsed DAG information) and the configuration parameters (i.e., the execution environments of the tasks).
While the flow executes, the tasks are also monitored: the execution of each task is tracked through the results returned by the execution framework or the data service, the monitoring results are fed back to the corresponding base table in the database, and the execution states of the tasks are displayed in real time.
By adopting the task scheduling method provided by the embodiment of the specification, more data analysis task types and platforms can be supported, frequent landing of intermediate data to disk during big data analysis can be avoided, and the execution efficiency of the flow is improved.
The above details a task scheduling method, and accordingly, the present specification also provides a task scheduling apparatus, as shown in fig. 4. The task scheduling device includes:
an obtaining module 401, which obtains a flow to be processed;
an editing module 403, configured to describe the to-be-processed flow according to a preset description template, to obtain a flow description file, where the flow description file is a file describing the to-be-processed flow and tasks included in the to-be-processed flow;
the execution module 405 writes the process description file into a task queue, and executes the to-be-processed process.
Further, the describing the to-be-processed flow according to a preset description template to obtain a flow description file specifically includes:
and describing the process description information and the process section description information corresponding to the to-be-processed process and the task description information corresponding to the to-be-processed process according to the preset description template to obtain a process description file corresponding to the to-be-processed process, wherein the to-be-processed process is composed of at least one process section.
Further, the process description information and the process segment description information corresponding to the to-be-processed process include: the name of the flow and/or flow segment to be processed, the unique identifier of the flow segment corresponding to the flow to be processed, tasks included in the flow to be processed, the number of tasks included in the flow to be processed, the input and output of the flow and/or flow segment to be processed, the execution environment of the flow and/or flow segment to be processed, the priority of the flow and/or flow segment to be processed, and the dependency relationship between the flow and/or flow segment to be processed and other flows and/or flow segments;
the task description information corresponding to the flow to be processed comprises the name of the task included in the flow to be processed, the unique identification of the task, the input and output of the task, the execution environment of the task and the dependency relationship between the task and other tasks.
Further, the writing the process description file into a task queue and executing the to-be-processed process specifically includes:
judging whether the task included in the flow to be processed is a timing task, if the task included in the flow to be processed is the timing task, writing the flow description file and the timing type of the task included in the flow to be processed into the task queue, and executing the flow to be processed;
and if the task included in the flow to be processed is a non-timing task, directly writing the flow description file into the task queue, and executing the flow to be processed.
Further, the writing the process description file into a task queue and executing the to-be-processed process specifically includes:
reading the process description file, and writing the process description file into a task queue;
adopting DAG to analyze the flow description file in the task queue to generate flow DAG information;
performing flow packaging based on the flow DAG information to obtain a big data analysis flow;
and submitting the big data analysis flow to a corresponding execution platform, and executing the flow to be processed.
Further, the process packaging is performed based on the process DAG information to obtain a big data analysis process, and the process specifically includes:
and based on the flow DAG information, performing flow sub-packaging according to the execution environment of the task included in the task description information in the flow description file to obtain a big data analysis flow.
Further, the submitting the big data analysis process to a corresponding execution platform, and executing the to-be-processed process specifically includes:
and submitting the big data analysis flow to a corresponding execution platform based on the execution environment of the task included in the task description information included in the big data analysis flow, and executing the flow to be processed.
An embodiment of the present specification further provides an electronic device, including:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring a flow to be processed;
describing the flow to be processed according to a preset description template to obtain a flow description file, wherein the flow description file is a file for describing the flow to be processed and tasks included in the flow to be processed;
and writing the flow description file into a task queue, and executing the flow to be processed.
Further, the describing the to-be-processed flow according to a preset description template to obtain a flow description file specifically includes:
and describing the process description information and the process section description information corresponding to the to-be-processed process and the task description information corresponding to the to-be-processed process according to the preset description template to obtain a process description file corresponding to the to-be-processed process, wherein the to-be-processed process is composed of at least one process section.
Further, the process description information and the process segment description information corresponding to the to-be-processed process include: the name of the flow and/or flow segment to be processed, the unique identifier of the flow segment corresponding to the flow to be processed, tasks included in the flow to be processed, the number of tasks included in the flow to be processed, the input and output of the flow and/or flow segment to be processed, the execution environment of the flow and/or flow segment to be processed, the priority of the flow and/or flow segment to be processed, and the dependency relationship between the flow and/or flow segment to be processed and other flows and/or flow segments;
the task description information corresponding to the flow to be processed comprises the name of the task included in the flow to be processed, the unique identification of the task, the input and output of the task, the execution environment of the task and the dependency relationship between the task and other tasks.
Further, the writing the process description file into a task queue and executing the to-be-processed process specifically includes:
judging whether the task included in the flow to be processed is a timing task, if the task included in the flow to be processed is the timing task, writing the flow description file and the timing type of the task included in the flow to be processed into the task queue, and executing the flow to be processed;
and if the task included in the flow to be processed is a non-timing task, directly writing the flow description file into the task queue, and executing the flow to be processed.
Further, the writing the process description file into a task queue and executing the to-be-processed process specifically includes:
reading the process description file, and writing the process description file into a task queue;
adopting DAG to analyze the flow description file in the task queue to generate flow DAG information;
performing flow packaging based on the flow DAG information to obtain a big data analysis flow;
and submitting the big data analysis flow to a corresponding execution platform, and executing the flow to be processed.
Further, the process packaging is performed based on the process DAG information to obtain a big data analysis process, and the process specifically includes:
and based on the flow DAG information, performing flow sub-packaging according to the execution environment of the task included in the task description information in the flow description file to obtain a big data analysis flow.
Further, the submitting the big data analysis process to a corresponding execution platform, and executing the to-be-processed process specifically includes:
and submitting the big data analysis flow to a corresponding execution platform based on the execution environment of the task included in the task description information included in the big data analysis flow, and executing the flow to be processed.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the apparatus, the electronic device, and the nonvolatile computer storage medium, since they are substantially similar to the embodiments of the method, the description is simple, and the relevant points can be referred to the partial description of the embodiments of the method.
The apparatus, the electronic device, the nonvolatile computer storage medium and the method provided in the embodiments of the present description correspond to each other, and therefore, the apparatus, the electronic device, and the nonvolatile computer storage medium also have similar advantageous technical effects to the corresponding method.
In the 1990s, an improvement in a technology could be clearly distinguished as an improvement in hardware (for example, an improvement in a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement in a method flow). However, as technology develops, many of today's improvements in method flows can be regarded as direct improvements in hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement in a method flow cannot be realized by a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, today, instead of manually making an integrated circuit chip, this programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained simply by lightly programming the method flow into an integrated circuit using the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller; examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer-readable program code, the same functionality can be implemented entirely by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for implementing various functions may also be regarded as structures within the hardware component; or even the means for implementing various functions may be regarded both as software modules for implementing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the various elements may be implemented in the same one or more software and/or hardware implementations in implementing one or more embodiments of the present description.
As will be appreciated by one skilled in the art, the present specification embodiments may be provided as a method, system, or computer program product. Accordingly, embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing apparatus create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technologies, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present specification, and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (9)

1. A method for task scheduling, the method comprising:
acquiring a flow to be processed;
describing the flow to be processed according to a preset description template to obtain a flow description file, wherein the flow description file is a file for describing the flow to be processed and tasks included in the flow to be processed;
and writing the flow description file into a task queue, and executing the flow to be processed.
2. The method according to claim 1, wherein the describing the flow to be processed according to a preset description template to obtain a flow description file specifically includes:
and describing the process description information and the process section description information corresponding to the to-be-processed process and the task description information corresponding to the to-be-processed process according to the preset description template to obtain a process description file corresponding to the to-be-processed process, wherein the to-be-processed process is composed of at least one process section.
3. The method of claim 2, wherein the process description information and the process segment description information corresponding to the to-be-processed process comprise: the name of the flow and/or flow segment to be processed, the unique identifier of the flow segment corresponding to the flow to be processed, tasks included in the flow to be processed, the number of tasks included in the flow to be processed, the input and output of the flow and/or flow segment to be processed, the execution environment of the flow and/or flow segment to be processed, the priority of the flow and/or flow segment to be processed, and the dependency relationship between the flow and/or flow segment to be processed and other flows and/or flow segments;
the task description information corresponding to the flow to be processed comprises the name of the task included in the flow to be processed, the unique identification of the task, the input and output of the task, the execution environment of the task and the dependency relationship between the task and other tasks.
4. The method of claim 1, wherein the writing the process description file into a task queue and executing the pending process specifically includes:
judging whether the task included in the flow to be processed is a timing task, if the task included in the flow to be processed is the timing task, writing the flow description file and the timing type of the task included in the flow to be processed into the task queue, and executing the flow to be processed;
and if the task included in the flow to be processed is a non-timing task, directly writing the flow description file into the task queue, and executing the flow to be processed.
5. The method of claim 1, wherein the writing the process description file into a task queue and executing the pending process specifically includes:
reading the process description file, and writing the process description file into a task queue;
adopting DAG to analyze the flow description file in the task queue to generate flow DAG information;
performing flow packaging based on the flow DAG information to obtain a big data analysis flow;
and submitting the big data analysis flow to a corresponding execution platform, and executing the flow to be processed.
6. The method of claim 5, wherein the process packaging is performed based on the process DAG information to obtain a big data analysis process, and specifically comprises:
and based on the flow DAG information, performing flow sub-packaging according to the execution environment of the task included in the task description information in the flow description file to obtain a big data analysis flow.
7. The method of claim 5, wherein the submitting the big data analysis flow to a corresponding execution platform and executing the flow to be processed specifically comprises:
submitting the big data analysis flow to the corresponding execution platform based on the execution environment of the task recorded in the task description information carried by the big data analysis flow, and executing the flow to be processed.
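One hedged way to picture the environment-keyed submission of claim 7, with made-up platform names and print handlers standing in for real cluster clients:

```python
def submit(packages: dict) -> None:
    """Send each per-environment package to its execution platform (names are hypothetical)."""
    platforms = {
        "spark": lambda pkg: print(f"submitting to Spark cluster: {pkg}"),
        "elasticsearch": lambda pkg: print(f"submitting to ES cluster: {pkg}"),
    }
    for environment, package in packages.items():
        handler = platforms.get(environment)
        if handler is None:
            raise ValueError(f"no execution platform registered for {environment!r}")
        handler(package)


submit({"spark": ["extract", "filter"], "elasticsearch": ["index"]})
```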
8. A task scheduling apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a flow to be processed;
an editing module, configured to describe the flow to be processed according to a preset description template to obtain a flow description file, wherein the flow description file is a file for describing the flow to be processed and the tasks included in the flow to be processed;
and an execution module, configured to write the flow description file into a task queue and execute the flow to be processed.
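Read as software, the apparatus of claim 8 amounts to three cooperating components; the class below is a hypothetical sketch of that split, with assumed method names, not the claimed device:

```python
from collections import deque


class TaskScheduler:
    """Illustrative counterpart of the acquisition / editing / execution modules."""

    def __init__(self):
        self.task_queue = deque()

    def acquire(self, source: dict) -> dict:
        # Acquisition module: obtain the flow to be processed.
        return source

    def edit(self, pending_flow: dict) -> dict:
        # Editing module: describe the flow per the preset template (fields assumed).
        return {"flow": pending_flow.get("name"), "tasks": pending_flow.get("tasks", [])}

    def execute(self, description_file: dict) -> None:
        # Execution module: write the description file into the queue and run the flow.
        self.task_queue.append(description_file)


scheduler = TaskScheduler()
flow = scheduler.acquire({"name": "demo", "tasks": ["t1", "t2"]})
scheduler.execute(scheduler.edit(flow))
print(scheduler.task_queue)
```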
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring a flow to be processed;
describing the flow to be processed according to a preset description template to obtain a flow description file, wherein the flow description file is a file for describing the flow to be processed and tasks included in the flow to be processed;
and writing the flow description file into a task queue, and executing the flow to be processed.
CN202110798469.4A 2021-07-15 2021-07-15 Task scheduling method, device and equipment Active CN113326117B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110798469.4A CN113326117B (en) 2021-07-15 2021-07-15 Task scheduling method, device and equipment

Publications (2)

Publication Number Publication Date
CN113326117A (en) 2021-08-31
CN113326117B CN113326117B (en) 2021-10-29

Family

ID=77426535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110798469.4A Active CN113326117B (en) 2021-07-15 2021-07-15 Task scheduling method, device and equipment

Country Status (1)

Country Link
CN (1) CN113326117B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104679482A (en) * 2013-11-27 2015-06-03 北京拓尔思信息技术股份有限公司 OSGI (Open Service Gateway Initiative)-based ETL (Extraction-Transformation-Loading) processing device and method
CN109284324A (en) * 2018-10-16 2019-01-29 深圳中顺易金融服务有限公司 The dispatching device of flow tasks based on Apache Oozie frame processing big data
CN109684053A (en) * 2018-11-05 2019-04-26 广东岭南通股份有限公司 The method for scheduling task and system of big data
CN110888721A (en) * 2019-10-15 2020-03-17 平安科技(深圳)有限公司 Task scheduling method and related device
CN112035230A (en) * 2020-09-01 2020-12-04 中国银行股份有限公司 Method and device for generating task scheduling file and storage medium
US20210011762A1 (en) * 2018-03-30 2021-01-14 Huawei Technologies Co., Ltd. Deep Learning Job Scheduling Method and System and Related Device
CN112596885A (en) * 2020-12-25 2021-04-02 网易(杭州)网络有限公司 Task scheduling method, device, equipment and storage medium
CN112613832A (en) * 2020-12-02 2021-04-06 南京南瑞信息通信科技有限公司 Lightweight workflow component based on finite-state machine and processing method thereof
CN112766907A (en) * 2021-01-20 2021-05-07 中国工商银行股份有限公司 Service data processing method and device and server
CN113032374A (en) * 2019-12-24 2021-06-25 北京数聚鑫云信息技术有限公司 Data processing method, device, medium and equipment

Also Published As

Publication number Publication date
CN113326117B (en) 2021-10-29

Similar Documents

Publication Publication Date Title
CN107450972B (en) Scheduling method and device and electronic equipment
US8893118B2 (en) Migratable unit based application migration
US8813035B2 (en) Paradigm for concurrency testcase generation
CN110471754B (en) Data display method, device, equipment and storage medium in job scheduling
CN103942099B (en) Executing tasks parallelly method and device based on Hive
US9983979B1 (en) Optimized dynamic matrixing of software environments for application test and analysis
US20220164222A1 (en) Execution of Services Concurrently
CN110401700A (en) Model loading method and system, control node and execution node
CN109947643B (en) A/B test-based experimental scheme configuration method, device and equipment
Ibryam et al. Kubernetes Patterns
CN108984652A (en) A kind of configurable data cleaning system and method
CN112748993A (en) Task execution method and device, storage medium and electronic equipment
CN110532044A (en) A kind of big data batch processing method, device, electronic equipment and storage medium
CN109144511B (en) Method and system for automatically generating numerical simulation graphical user interface
US20170147398A1 (en) Estimating job start times on workload management systems
CN110046100B (en) Packet testing method, electronic device and medium
Poquet Simulation approach for resource management
US20170094020A1 (en) Processing requests for multi-versioned service
CN113326117B (en) Task scheduling method, device and equipment
US20150293529A1 (en) Method and system for controlling a manufacturing plant with a manufacturing execution system
US20140310070A1 (en) Coordinated business rules management and mixed integer programming
CN111881025B (en) Automatic test task scheduling method, device and system
CN114924857A (en) Redis-based distributed timing scheduling method and device and storage medium
US20170090984A1 (en) Progress visualization of computational job

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant