CN111324434B - Configuration method, device and execution system of computing task - Google Patents

Configuration method, device and execution system of computing task

Info

Publication number
CN111324434B
Authority
CN
China
Prior art keywords
task
computation
logic
logic module
data
Prior art date
Legal status
Active
Application number
CN202010079567.8A
Other languages
Chinese (zh)
Other versions
CN111324434A (en)
Inventor
李岱骏
王增彦
柳彦召
Current Assignee
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202010079567.8A
Publication of CN111324434A
Application granted
Publication of CN111324434B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Advance Control (AREA)

Abstract

Embodiments of this specification provide a configuration method, apparatus, and execution system for computing tasks. In the method, in response to receiving a second computing task, if the data source table of the second computing task is used by a first computing task, the first and second computing tasks can be merged. When the first source data differs from the second source data, an output configuration is added to the first parsing logic module; when the computation logic of the second computing task is the same as that of the first computing task, an input/output configuration is added to the first computation logic module; and when the two computation logics differ, a second pipeline is created.

Description

Configuration method, device and execution system of computing task
Technical Field
The embodiment of the specification relates to the technical field of computers, in particular to a configuration method, a configuration device and an execution system of a computing task.
Background
A computer or a computer cluster can run many computing tasks, each of which processes data according to preset logic to obtain a computation result. A computing task includes programs, data, and related control commands. For each computing task the computer starts a thread, and every thread must be supported by computer resources such as processor and memory resources; real-time computing tasks in particular occupy considerable resources. Running hundreds of computing tasks simultaneously is a routine operating state, which ties up a large amount of computer resources and increases operation and maintenance costs. How to merge computing tasks that satisfy a merging condition is therefore an urgent problem to be solved.
Disclosure of Invention
In view of the above, embodiments of this specification provide a configuration method, apparatus, and execution system for computing tasks. In the method, in response to receiving a second computing task, if the data source table of the second computing task is used by a first computing task, the first and second computing tasks can be merged. Specifically, when the first source data differs from the second source data, an output configuration is added to the first parsing logic module; when the computation logic of the second computing task is the same as that of the first computing task, an input/output configuration is added to the first computation logic module; and when the two computation logics differ, a second pipeline is created. In this way, a first and a second computing task that use the same data source table are merged: at least the first parsing logic can be reused, and the first computation logic can also be reused when the computation logics are the same, so that a separate second parsing logic and second computation logic need not be re-created for the second computing task, which saves computer resources.
According to an aspect of embodiments of the present specification, there is provided a method for configuring a computing task, comprising: in response to receiving a second computing task, determining a data source table and second source data for the second computing task; checking whether a data source table of the second computing task is used by a first computing task currently running, the first computing task being run using a first pipeline, the first pipeline including a first parsing logic module and a first computing logic module; when the data source table of the second computing task is used by the first computing task and the second source data is different from the first source data of the first computing task, adding an output configuration of the first parsing logic module, wherein the added output end is used for outputting a data parsing result aiming at the second source data; adding an input/output configuration of the first computation logic module when the computation logic of the second computation task is the same as the computation logic of the first computation task, the added input configured to receive a data parsing result for the second source data from the first parsing logic module, and the added output for outputting the computation result for the second source data; and creating a second pipeline when the computational logic of the second computational task and the computational logic of the first computational task are different, the second pipeline comprising a second computational logic module, an input of the second computational logic module configured to receive a data parsing result for the second source data from the first pipeline.
Optionally, in an example of the above aspect, further comprising: adding an input configuration of the first parsing logic module, the added input for receiving the second source data.
Optionally, in an example of the above aspect, further comprising: creating a second pipeline when a data source table of the second computing task is used by the first computing task, the second source data is the same as the first source data of the first computing task, and the computational logic of the second computing task is different from the computational logic of the first computing task, the second pipeline including a second computational logic module, an input of the second computational logic module configured to receive a data parse result output by the first parsing logic from the first pipeline.
Optionally, in one example of the above aspect, the computation logic includes a sequence of sequentially executed sub-computation logics, wherein, when the sub-computation logic sequence of the second computing task is identical to that of the first computing task over only a partial prefix starting from the initial sub-computation logic, the second computation logic module in the created second pipeline includes a sequence of second sub-computation logic modules, one module for each sub-computation logic from the first differing sub-computation logic onward, and an input of the initial module of this second sequence is configured to receive the output of the last identical sub-computation logic module of the first computation logic module.
Optionally, in one example of the above aspect, the sequence of sub-computation logics includes a filtering sub-logic, a grouping sub-logic, and an aggregation function sub-logic that are executed sequentially.
Optionally, in an example of the above aspect, further comprising: creating a second pipeline comprising a second parsing logic module and a second computation logic module when a data source table of the second computation task is not used by the first computation task.
Optionally, in an example of the above aspect, the second computing task is a computing task defined based on an SQL statement; in response to receiving a second computing task, determining a data source table and a second source data for the second computing task comprises: and analyzing the SQL statement to determine a data source table and second source data of the second computing task.
Optionally, in one example of the above aspect, the first computing task and the second computing task are feature computing tasks based on a sliding window algorithm.
According to another aspect of embodiments herein, there is also provided an apparatus for configuring a computing task, including: the source data determining unit is used for responding to the receiving of a second computing task and determining a data source table and second source data of the second computing task; the data source table checking unit checks whether a data source table of the second computing task is used by a first computing task which is currently running, wherein the first computing task is run by using a first pipeline, and the first pipeline comprises a first resolution logic module and a first computing logic module; when the data source table of the second computing task is used by the first computing task and the second source data is different from the first source data of the first computing task, the parsing logic module configuration unit adds the output configuration of the first parsing logic module, and the added output end is used for outputting a data parsing result aiming at the second source data; a computation logic module configuration unit adds an input/output configuration of the first computation logic module when the computation logic of the second computation task is the same as the computation logic of the first computation task, the added input terminal is configured to receive a data parsing result for the second source data from the first parsing logic module, and the added output terminal is used for outputting the computation result for the second source data; and a pipeline creation unit to create a second pipeline when the computational logic of the second computational task and the computational logic of the first computational task are different, the second pipeline including a second computational logic module, an input of the second computational logic module configured to receive a data parsing result for the second source data from the first pipeline.
Optionally, in an example of the above aspect, the parsing logic module configuring unit: adding an input configuration of the first parsing logic module, the added input for receiving the second source data.
Optionally, in one example of the above aspect, the pipeline creation unit: creating a second pipeline when a data source table of the second computing task is used by the first computing task, the second source data is the same as the first source data of the first computing task, and the computational logic of the second computing task is different from the computational logic of the first computing task, the second pipeline including a second computational logic module, an input of the second computational logic module configured to receive a data parse result output by the first parsing logic from the first pipeline.
Optionally, in one example of the above aspect, the computation logic includes a sequence of sequentially executed sub-computation logics, wherein, when the sub-computation logic sequence of the second computing task is identical to that of the first computing task over only a partial prefix starting from the initial sub-computation logic, the second computation logic module in the created second pipeline includes a sequence of second sub-computation logic modules, one module for each sub-computation logic from the first differing sub-computation logic onward, and an input of the initial module of this second sequence is configured to receive the output of the last identical sub-computation logic module of the first computation logic module.
Optionally, in one example of the above aspect, the sequence of sub-computation logics includes a filtering sub-logic, a grouping sub-logic, and an aggregation function sub-logic that are executed sequentially.
Optionally, in one example of the above aspect, the pipeline creation unit: creating a second pipeline comprising a second parsing logic module and a second computation logic module when a data source table of the second computation task is not used by the first computation task.
Optionally, in an example of the above aspect, the second computing task is a computing task defined based on an SQL statement; the source data determination unit: and analyzing the SQL statement to determine a data source table and second source data of the second computing task.
There is also provided, in accordance with another aspect of an embodiment of the present specification, a system for performing computing tasks, including: the system comprises a task configuration platform, a task configuration framework and a task execution platform; the task configuration platform executes any one of the above methods for configuring a computing task to configure a first computing task and a second computing task in the task configuration framework; and the task execution platform responds to the completion of configuration of the task configuration platform, sends the acquired first source data of the first computing task and the acquired second source data of the second computing task to the task configuration framework to execute the first computing task and the second computing task, and receives an execution result fed back by the task configuration framework.
According to another aspect of embodiments of the present specification, there is also provided a computing device including: at least one processor; and a memory storing instructions that, when executed by the at least one processor, cause the at least one processor to perform a method for configuring a computing task as described above.
According to another aspect of embodiments herein, there is also provided a machine-readable storage medium storing executable instructions that, when executed, cause the machine to perform a method for configuring a computing task as described above.
Drawings
A further understanding of the nature and advantages of the present disclosure may be realized by reference to the following drawings. In the drawings, similar components or features may have the same reference numerals. The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the detailed description serve to explain the embodiments of the invention. In the drawings:
FIG. 1 illustrates a flow diagram of a method for configuring computing tasks of an embodiment of the present description;
FIG. 2 illustrates a schematic diagram of one example of a second computing task defined based on an SQL statement of embodiments of the specification;
FIG. 3 is a diagram illustrating an example of a metadata structure parsed from an SQL statement of an embodiment of the specification;
FIG. 4 illustrates a schematic diagram of one example of a first pipeline including a first parsing logic module and a first computation logic module of embodiments of the present description;
FIG. 5 illustrates a schematic diagram of one example of a second pipeline including a second parsing logic module and a second computation logic module of embodiments of the present description;
FIG. 6 is a diagram illustrating one example of a merged configuration of two computing tasks of an embodiment of the present description;
FIG. 7 is a diagram illustrating another example of a merged configuration of two computing tasks of an embodiment of the present description;
FIG. 8 is a diagram illustrating another example of a merged configuration of two computing tasks of an embodiment of the present description;
FIG. 9 is a diagram illustrating another example of a merged configuration of two computing tasks of an embodiment of the present description;
FIG. 10 is a diagram illustrating another example of a merged configuration of two computing tasks of an embodiment of the present description;
FIG. 11 is a diagram illustrating another example of a merged configuration of two computing tasks of an embodiment of the present description;
FIG. 12 is a diagram illustrating another example of a merged configuration of two computing tasks of an embodiment of the present description;
FIG. 13 illustrates a block diagram of an apparatus for configuring computing tasks of an embodiment of the specification;
FIG. 14 illustrates a block diagram of a system for performing computing tasks according to embodiments of the present description; and
FIG. 15 illustrates a block diagram of a computing device for implementing a configuration method for computing tasks, according to an embodiment of the present description.
Detailed Description
The subject matter described herein will be discussed with reference to example embodiments. It should be understood that these embodiments are discussed only to enable those skilled in the art to better understand and thereby implement the subject matter described herein, and are not intended to limit the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as needed. In addition, features described with respect to some examples may also be combined in other examples.
As used herein, the term "include" and its variants mean open-ended terms in the sense of "including, but not limited to. The term "based on" means "based at least in part on". The terms "one embodiment" and "an embodiment" mean "at least one embodiment". The term "another embodiment" means "at least one other embodiment". The terms "first," "second," and the like may refer to different or the same object. Other definitions, whether explicit or implicit, may be included below. The definition of a term is consistent throughout the specification unless the context clearly dictates otherwise.
As used herein, the term "couple" refers to a direct mechanical, communication, or electrical connection between two components, or an indirect mechanical, communication, or electrical connection through an intermediate component. The term "electrically connected" means that electrical communication can be made between two components for data/information exchange. Likewise, the electrical connection may refer to a direct electrical connection between two components, or an indirect electrical connection through an intermediate component. The electrical connection may be made in a wired manner or a wireless manner.
A computer or a computer cluster can run many computing tasks, each of which processes data according to preset logic to obtain a computation result. A computing task includes programs, data, related control commands, and the like. For each computing task the computer starts a thread, and every thread must be supported by computer resources such as processor and memory resources; real-time computing tasks in particular occupy considerable resources. Running hundreds of computing tasks simultaneously is a routine operating state, which ties up a large amount of computer resources and increases operation and maintenance costs. How to merge computing tasks that satisfy a merging condition is therefore an urgent problem to be solved.
In order to solve the above problem, embodiments of this specification provide a configuration method, apparatus, and execution system for computing tasks. In the method, in response to receiving a second computing task, if the data source table of the second computing task is used by a first computing task, the first and second computing tasks can be merged. Specifically, when the first source data differs from the second source data, an output configuration is added to the first parsing logic module; when the computation logic of the second computing task is the same as that of the first computing task, an input/output configuration is added to the first computation logic module; and when the two computation logics differ, a second pipeline is created. In this way, a first and a second computing task that use the same data source table are merged: at least the first parsing logic can be reused, and the first computation logic can also be reused when the computation logics are the same, so that a separate second parsing logic and second computation logic need not be re-created for the second computing task, which saves computer resources.
The following describes a configuration method, an apparatus and an execution system of a computing task according to an embodiment of the present specification in detail with reference to the accompanying drawings.
FIG. 1 illustrates a flow diagram of a method for configuring a computing task of an embodiment of the present description.
As shown in FIG. 1, at block 110, in response to receiving a second computing task, a data source table and second source data for the second computing task are determined.
In this specification embodiment, the second computing task is an unconfigured, to-be-executed computing task at the time it is received. For example, a user may submit the second computing task to a computing engine, and the computing engine receives it and then responds to it.
In this specification embodiment, the second source data is the business data of the second computing task, against which the second computing task performs its computation; the data source table is the data source of the second computing task, and the second source data is stored in the data source table. When the second computing task is executed, the second source data is retrieved from the data source table for processing. The second source data may include all of the source data in the data source table or only part of it.
In one example, the second computing task may be characterized in the form of a metadata structure, the data source table and the second source data of the second computing task being determined based on the metadata structure information.
The metadata structure information describes the organization, data fields, and relationships of the source data of the computing task. For example, it may include the name of the data source table and field information for the fields included in that table; the data source table of the second computing task can be determined from the table name, and the data in the fields identified by the field information is the second source data.
For example, if the data source table named in the metadata structure information of the second computing task is "student basic information" and the field information includes name and class, the data in the name and class fields of that table is the second source data.
In one example, the data source table for the second computing task may be stored in a database, which may also store other data source tables. After the data source table and the second source data of the second computing task are determined, the data source table may be obtained from the database, and the second source data may be further obtained from the data source table. The source table name of the data source table of the second computing task, the data type, the type of the data source and other related information can be stored in the metadata base.
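For illustration only, the metadata structure described above can be modeled as a small record holding the source table name and the field list; the class and field names below are hypothetical and not part of the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TaskMetadata:
    """Hypothetical metadata structure for a computing task."""
    source_table: str        # name of the data source table
    fields: List[str]        # fields whose data form the source data
    target_table: str = ""   # optional target table for the computation results

# The "student basic information" example from the text: the second source
# data is the data in the "name" and "class" fields of that table.
meta = TaskMetadata(source_table="student basic information",
                    fields=["name", "class"])
print(meta.source_table, meta.fields)
```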
In one example, the second computing task may be a computing task defined based on an SQL (Structured Query Language) statement. In this example, the user may submit an SQL statement for defining the second computing task to the computing engine, which receives the SQL statement, i.e., receives the second computing task.
FIG. 2 illustrates a schematic diagram of an example of a second computing task defined based on an SQL statement of embodiments of the present specification. As shown in fig. 2, the data source table of the second computing task is t_dwd_evt_ant_biz_offline_trd_ri, and the metadata of the second computing task includes: userId, user_pay_day_07d, gmtBizPay, SELECT userId, GROUP BY hop(to_timestamp(gmtBizPay), INTERVAL '7' MINUTE, INTERVAL '7' DAY), userId, and the like.
Upon receiving the SQL statement, the SQL statement may be parsed into a metadata structure to determine the data source table and second source data of the second computing task. In one example, the SQL statement may first be expressed as a syntax tree, the syntax tree may then be converted into a relational algebra expression, and that expression may in turn be converted into a physical query plan that specifies the execution order of the operations, the algorithm used in each step, the transfer manner between operations, and the like. For example, the SQL statement may be parsed with a parser such as Calcite or Presto.
Taking fig. 3 as an example, fig. 3 is a schematic diagram illustrating an example of a metadata structure obtained by parsing an SQL statement according to an embodiment of the present specification. Parsing the SQL statement shown in fig. 2 yields the metadata structure shown in fig. 3, which includes: the target table, the data source table, the field types, the field values, the field indices, and the like. The parsed fields include userId, user_pay_day_07d, gmtBizPay, and the like; the target table is user_pay_info; and the data source table is t_dwd_evt_ant_biz_offline_trd_ri. The data source table and the second source data can be determined from the resulting metadata structure.
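For illustration, the sketch below shows the kind of extraction this step performs, assuming a simplified query modeled on fig. 2. A real system would use a full SQL parser such as Calcite or Presto rather than the toy regex here; the function name and the exact query text are hypothetical.

```python
import re

def extract_metadata(sql: str) -> dict:
    """Toy extraction of the data source table and the selected fields.
    A production system would use a real SQL parser; this regex sketch
    only illustrates the metadata-extraction step."""
    table = re.search(r"\bFROM\s+([\w.]+)", sql, re.IGNORECASE).group(1)
    select = re.search(r"SELECT\s+(.*?)\s+FROM", sql,
                       re.IGNORECASE | re.DOTALL).group(1)
    fields = [part.strip().split()[-1] for part in select.split(",")]
    return {"data_source_table": table, "fields": fields}

# Simplified approximation of the statement in fig. 2 (not the exact query).
sql = ("SELECT userId, COUNT(*) AS user_pay_day_07d "
       "FROM t_dwd_evt_ant_biz_offline_trd_ri "
       "GROUP BY hop(to_timestamp(gmtBizPay), INTERVAL '7' MINUTE, INTERVAL '7' DAY), userId")
print(extract_metadata(sql))
```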
At block 120, it is checked whether the data source table of the second computing task is used by the currently running first computing task. If so, flow proceeds to block 130; if not, flow proceeds to block 190.
In embodiments of the present description, a first pipeline may be used to run a first computing task, and the first pipeline may include a first parsing logic module and a first computing logic module.
The first parsing logic module may be configured to parse the first source data of the first computing task, where the parsing may include data format unification processing, redundancy deletion processing, and the like. For example, the data format unification process may unify numbers in the source data into a single precision floating point type, and the redundancy deletion process may delete redundant spaces. The first calculation logic module is used for performing logic calculation on the analyzed first source data, and the output calculation result is the calculation result of the first calculation task.
FIG. 4 illustrates a schematic diagram of one example of a first pipeline including a first parsing logic module and a first computation logic module of embodiments of the present description. As shown in fig. 4, the first parsing logic module is configured with an input and an output, the first calculation logic module is configured with an input and an output, and the output of the first parsing logic module is connected to the input of the first calculation logic module. The direction of the arrow represents a direction of transfer of the first source data of the first computing task in the first pipeline.
The first source data are input to the first analysis logic module, the first analysis logic module analyzes the input first source data and outputs the analyzed first source data to the first calculation logic module, and the first calculation logic module performs logic calculation on the analyzed first source data and outputs a calculation result aiming at the first calculation task.
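A minimal sketch of such a first pipeline, assuming the toy parsing behavior described above; the class names and the doubling logic are illustrative, not taken from the patent.

```python
class ParsingModule:
    """Simplified parsing logic module: unifies number formats and
    strips redundant whitespace, as described above."""
    def parse(self, record: dict) -> dict:
        out = {}
        for key, value in record.items():
            if isinstance(value, (int, float)):
                value = float(value)              # unify numbers to floating point
            elif isinstance(value, str):
                value = " ".join(value.split())   # delete redundant spaces
            out[key] = value
        return out

class ComputeModule:
    """Simplified computation logic module applying the task's logic."""
    def __init__(self, logic):
        self.logic = logic
    def compute(self, parsed: dict):
        return self.logic(parsed)

class Pipeline:
    """First pipeline: the parsing module's output feeds the compute module's input."""
    def __init__(self, parser: ParsingModule, compute: ComputeModule):
        self.parser, self.compute = parser, compute
    def run(self, record: dict):
        return self.compute.compute(self.parser.parse(record))

# Hypothetical first computing task: double an "amount" field.
first_pipeline = Pipeline(ParsingModule(), ComputeModule(lambda r: r["amount"] * 2))
print(first_pipeline.run({"amount": 3, "user": "  alice  "}))
```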
In one example, a task association table, which is a metadata table used for recording the correspondence between the data source table in the metadata database and the associated computing task, may be further included in the metadata database storing the metadata of the data source table. When a data source table in the metadata base is associated with a calculation task, a record is added in the task association table to represent the corresponding relation between the data source table and the calculation task.
In one example, the task association table may record all data source tables in the metadata base, and if the data source table is not associated with a computing task, the data source table does not have a corresponding computing task in the task association table. In this example, when checking whether the data source table of the second computing task is used by the first computing task that is currently running, a record of the data source table may be looked up in the task association table, and it may be checked whether the data source table in the record corresponds to a computing task. If there is no corresponding computing task, it may be determined that the data source table of the second computing task is not used by the first computing task, and if there is a corresponding computing task, the computing task is the first computing task that is currently running.
In another example, the task association table may record only the data source table currently associated with the computing task and record the corresponding associated computing task of the data source table accordingly. In this example, when checking whether the data source table of the second computing task is used by the first computing task currently running, it may be looked up in the task association table whether the data source table is recorded, if not, it may be determined that the data source table of the second computing task is not used by the first computing task, and if so, the computing task corresponding to the data source table in the record is the first computing task.
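A minimal sketch of this check, assuming the task association table is modeled as a simple mapping from source table name to running task; all names are hypothetical.

```python
# Hypothetical task association table: data source table name -> running task id.
task_association = {
    "t_dwd_evt_ant_biz_offline_trd_ri": "first_computing_task",
}

def find_running_task(source_table: str):
    """Return the currently running task that uses the source table,
    or None if the table is not associated with any computing task."""
    return task_association.get(source_table)

running = find_running_task("t_dwd_evt_ant_biz_offline_trd_ri")
if running is None:
    print("create a new pipeline for the second computing task")   # block 190
else:
    print("try to merge with", running)                            # blocks 130-180
```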
In one example, at block 190, a second pipeline is created.
In this example, the data source table of the second computing task is not used by the currently running first computing task, indicating that the second computing task cannot perform task merging processing with the first computing task, requiring the creation of a second pipeline. The created second pipeline may include a second parsing logic module and a second computation logic module to perform a second computation task.
The second analysis logic module is used for analyzing second source data of the second calculation task, and the second calculation logic module is used for performing logic calculation on a data analysis result output by the second analysis logic module to obtain a calculation result of the second calculation task.
FIG. 5 illustrates a schematic diagram of one example of a second pipeline including a second parsing logic module and a second computation logic module of embodiments of the present description. As shown in fig. 5, the second parsing logic module is configured with an input and an output, the second calculation logic module is configured with an input and an output, the output of the second parsing logic module is connected to the input of the second calculation logic module, and the arrow is used to indicate the transmission direction of the second source data of the second calculation task in the second pipeline.
At block 130, it is checked whether the second source data is different from the first source data of the first computing task. If so, flow proceeds to block 140; if not, flow proceeds to block 150.
If the second source data is the same as the first source data, the second computing task processes the same business data as the first computing task. If the second source data differs from the first source data, the two computing tasks process different business data.
In one example, at block 150, it is checked whether the computational logic of the second computational task and the computational logic of the first computational task are the same. If not, flow proceeds to block 190. If so, it may be determined that the first computing task and the second computing task are the same computing task.
In this example, at block 190, a second pipeline may be created. The created second pipeline includes a second computational logic module, an input of which is configured to receive the data parsing result output by the first parsing logic module from the first pipeline. And the second calculation logic module outputs the calculation result of the second calculation task.
Fig. 6 is a diagram showing an example of a configuration in which two calculation tasks are merged according to an embodiment of the present specification. As shown in fig. 6, the first resolution logic block is configured with an input and an output, and the input of the first computation logic block and the input of the second computation logic block are both connected to the output of the first resolution logic block.
In a first pipeline, a first analysis logic module analyzes first source data to obtain a data analysis result, copies one part of the data analysis result, outputs one part of the data analysis result to a first calculation logic module, and outputs the other part of the data analysis result to a second calculation logic module. The first calculation logic module performs logic calculation on a data analysis result aiming at the first source data and outputs a calculation result of a first calculation task; and the second calculation logic module performs logic calculation on the analysis result aiming at the first source data and outputs a calculation result of the second calculation task.
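A sketch of the fig. 6 layout under these assumptions: one parsing module copies its data parsing result to every connected computation logic module. Class and function names are illustrative only.

```python
class FanOutParser:
    """Parsing module whose single data parsing result is copied to
    every downstream computation logic module (the fig. 6 layout)."""
    def __init__(self):
        self.outputs = []                 # downstream compute callables
    def add_output(self, compute):
        self.outputs.append(compute)
    def push(self, record: dict):
        parsed = {k: str(v).strip() for k, v in record.items()}   # toy parsing
        # one copy of the parsing result per connected compute module
        return [compute(dict(parsed)) for compute in self.outputs]

parser = FanOutParser()
parser.add_output(lambda r: ("first task result", r))    # first computation logic
parser.add_output(lambda r: ("second task result", r))   # second computation logic
print(parser.push({"userId": " u01 ", "amount": 5}))
```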
At block 140, the output configuration of the first parsing logic module is added.
The added output end is used for outputting a data analysis result aiming at the second source data, and the originally configured output end of the first analysis logic module is used for outputting the data analysis result aiming at the first source data.
It should be noted that the execution order of blocks 130 and 140 is not limited: block 130 may be executed before block 140, or block 140 may be executed before block 130. When block 140 is executed before block 130, the flow proceeds to block 170 if the check finds that the second source data differs from the first source data.
In one example, an input configuration of the first parsing logic module may also be added, the added input for receiving the second source data, the originally configured input of the first parsing logic module for receiving the first source data.
Fig. 7 is a diagram showing another example of a configuration in which two calculation tasks are merged according to the embodiment of the present specification. As shown in fig. 7, the first parsing logic module is configured with two input terminals and two output terminals, the two input terminals are respectively used for receiving the first source data and the second source data, and correspondingly, the two output terminals are respectively used for outputting the parsed first source data and the parsed second source data.
In another example, the input configuration of the first parsing logic module may be kept unchanged, and only the output configuration may be increased. In this example, an input of the first parsing logic module is to receive first source data and second source data.
Fig. 8 is a schematic diagram showing another example of a merged configuration of two computing tasks of the embodiment of the present specification. As shown in fig. 8, the first parsing logic module is configured with an input end and two output ends, wherein the input end is used for receiving the first source data and the second source data, and the two output ends are respectively used for outputting the parsed first source data and the parsed second source data.
In this example, the first source data and the second source data may be distinguished in a marking manner, and after the first source data and the second source data are analyzed by the first analysis logic module, the first source data and the second source data may be identified by a mark, the identified first source data subjected to analysis processing is output through one of the output terminals, and the identified second source data subjected to analysis processing is output through the other output terminal.
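A sketch of the fig. 8 variant, assuming the first and second source data are distinguished by a "source" mark so that one input can feed two outputs; the names and the tagging scheme are illustrative assumptions.

```python
class TaggedParser:
    """Parsing module with one input and two outputs (the fig. 8 layout):
    records carry a 'source' tag so the parsed first and second source
    data can be routed to the matching output port."""
    def __init__(self):
        self.out_first, self.out_second = [], []   # two output ports
    def push(self, record: dict):
        tag = record.pop("source")                 # mark identifying the source data
        parsed = {k: str(v).strip() for k, v in record.items()}
        (self.out_first if tag == "first" else self.out_second).append(parsed)

parser = TaggedParser()
parser.push({"source": "first",  "userId": " u01 "})
parser.push({"source": "second", "userId": " u02 "})
print(parser.out_first, parser.out_second)
```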
At block 170, it is checked whether the computational logic of the second computational task and the computational logic of the first computational task are the same. If so, flow proceeds to block 180; if not, flow proceeds to block 190.
In embodiments of the present description, the computation logic may include a plurality of sub-computation logics, and when there is at least one sub-computation logic different, the computation logic of the second computation task is different from the computation logic of the first computation task, and when all the sub-computation logics are the same, the computation logic of the second computation task is the same as the computation logic of the first computation task. In one example, the operations of block 170 may be the same as the operations of block 150 described above.
At block 180, the input/output configuration of the first computational logic module is added.
In the embodiments of the present specification, the input/output configuration means an input configuration and an output configuration. The added input is configured to receive a data parsing result for the second source data from the first parsing logic module, and the added output is for outputting a calculation result for the second source data.
Taking fig. 7 as an example, before adding the input/output configuration, the first computation logic module is configured as one input (hereinafter referred to as input a) for receiving the data parsing result for the first source data from the first parsing logic module and one output (hereinafter referred to as output a) for outputting the computation result for the first source data. After adding the input/output configuration, the first computation logic module adds an input (hereinafter referred to as input B) for receiving a data parsing result for the second source data from the first parsing logic module and an output (hereinafter referred to as output B) for outputting a computation result for the second source data.
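A sketch of the fig. 7 configuration under the same assumptions: one computation logic module with the original input/output pair A and the added pair B, both served by the same computation logic. The character-count logic is purely illustrative.

```python
class DualPortCompute:
    """Computation logic module after the added input/output configuration
    (fig. 7): the same computation logic serves both tasks, but results for
    the first and second source data leave through separate outputs."""
    def __init__(self, logic):
        self.logic = logic
    def input_a(self, parsed_first: dict):     # original input/output pair
        return ("output A", self.logic(parsed_first))
    def input_b(self, parsed_second: dict):    # added input/output pair
        return ("output B", self.logic(parsed_second))

count_chars = DualPortCompute(lambda r: len(r.get("userId", "")))
print(count_chars.input_a({"userId": "u01"}))
print(count_chars.input_b({"userId": "user02"}))
```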
At block 190, a second pipeline is created.
The created second pipeline may include a second computational logic module, an input of which is configured to receive data parsing results for second source data from the first pipeline. In one example, an input of the second computation logic module receives a data parsing result for the second source data from the first parsing logic module.
The output of the second computation logic module is the computation result of the second computation task. The input of the first computation logic module maintains the data parsing result for the first source data received from the first parsing logic module, and the output of the first computation logic module is the computation result of the first computation task.
In one example, fig. 9 shows a schematic diagram of another example of a configuration in which two computing tasks are merged according to an embodiment of the present specification. As shown in fig. 9, the first resolution logic block is configured with two inputs and two outputs, one of which is connected to the input of the first computation logic block and the other of which is connected to the input of the second computation logic block.
The first analysis logic module is used for receiving a data analysis result aiming at the first source data and a data analysis result aiming at the second source data, and the first calculation logic module is used for receiving the data analysis result aiming at the first source data, carrying out corresponding logic calculation and outputting a calculation result of the first calculation task. And the second calculation logic module receives the data analysis result aiming at the second source data, performs corresponding logic calculation and outputs the calculation result of the second calculation task.
In another example, fig. 10 is a schematic diagram illustrating another example of a configuration in which two calculation tasks are merged according to an embodiment of the present specification. As shown in fig. 10, the first resolution logic block is configured with one input and two outputs, one of which is connected to the input of the first computation logic block and the other of which is connected to the input of the second computation logic block.
One input end of the first analysis logic module is used for receiving first source data and second source data, two output ends of the first analysis logic module are respectively used for outputting a data analysis result aiming at the first source data and a data analysis result aiming at the second source data, and the first calculation logic module is used for receiving the data analysis result aiming at the first source data, carrying out corresponding logic calculation and outputting a calculation result of the first calculation task. And the second calculation logic module receives a data analysis result aiming at the second source data, performs corresponding logic calculation and outputs a calculation result of the second calculation task.
In one example, the computation logic may include a sequence of sequentially executed sub-computation logics, wherein, when the sub-computation logic sequence of the second computing task is identical to that of the first computing task over only a partial prefix starting from the initial sub-computation logic, the second computation logic module in the created second pipeline includes a sequence of second sub-computation logic modules, one module for each sub-computation logic from the first differing sub-computation logic onward, and an input of the initial module of this second sequence is configured to receive the output of the last identical sub-computation logic module of the first computation logic module.
In one example, the sequence of sub-computation logics includes a filtering sub-logic, a grouping sub-logic, and an aggregation function sub-logic that are executed sequentially.
For example, fig. 11 shows a schematic diagram of another example of a merged configuration of two computing tasks of the embodiment of the present specification. As shown in fig. 11, the filtering sub-logic, the grouping sub-logic, and the aggregation function sub-logic are sequentially executed based on the sub-computation logic sequence, the first computation logic module includes a filtering sub-logic module, a first grouping sub-logic module, and a first aggregation function sub-logic module, an input of the filtering sub-logic module is an input of the first computation logic module, an output of the filtering sub-logic module is connected to an input of the first grouping sub-logic module, an output of the first grouping sub-logic module is connected to an input of the first aggregation function sub-logic module, and an output of the first aggregation function sub-logic module is an output of the first computation logic module.
The sub-computation logic sequence of the second computing task includes the filtering sub-logic, a second grouping sub-logic, and a second aggregation function sub-logic, where the filtering sub-logic of the second computing task is the same as that of the first computing task. The second sub-computation logic module sequence therefore includes a second grouping sub-logic module different from the first grouping sub-logic module and a second aggregation function sub-logic module different from the first aggregation function sub-logic module. The initial sub-computation logic module of the second computation logic module sequence is the second grouping sub-logic module, whose input is the input of the second computation logic module, and that input is connected to the output of the filtering sub-logic module. The input of the second grouping sub-logic module is thus configured to receive the output of the filtering sub-logic module of the first computation logic module.
For another example, fig. 12 is a schematic diagram showing another example of a configuration in which two calculation tasks are merged according to the embodiment of the present specification. As shown in fig. 12, the first computation logic module includes a filtering sub-logic module, a grouping sub-logic module, and a first aggregation function sub-logic module, and the sub-computation logic sequence of the second computation task includes a filtering sub-logic, a grouping sub-logic, and a second aggregation function sub-logic, and then the second sub-computation logic module sequence includes only a second aggregation function sub-logic module different from the first aggregation function sub-logic module.
The input of the second aggregation function sub-logic module is the input of the second calculation logic module, and the input of the second calculation logic module is connected with the output of the grouping sub-logic module. The input of the second aggregation function sub-logic module is configured to receive the output of the grouping sub-logic module of the first computation logic module.
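A sketch of the shared-prefix case of figs. 11 and 12, assuming a toy event stream: the filtering sub-logic runs once, and each task's own grouping/aggregation sub-logic consumes the shared filtered output. The field names and aggregations are illustrative assumptions.

```python
from collections import defaultdict

events = [
    {"userId": "u1", "amount": 10}, {"userId": "u1", "amount": -3},
    {"userId": "u2", "amount": 7},  {"userId": "u2", "amount": 5},
]

# Shared filtering sub-logic module (identical in both tasks): keep positive amounts.
filtered = [e for e in events if e["amount"] > 0]

# First task branch: group by userId and aggregate with SUM.
sums = defaultdict(float)
for e in filtered:
    sums[e["userId"]] += e["amount"]

# Second task branch: reuses the shared filter output but applies its own
# grouping/aggregation sub-logic (here COUNT per userId).
counts = defaultdict(int)
for e in filtered:
    counts[e["userId"]] += 1

print(dict(sums), dict(counts))   # both tasks computed from one filtered stream
```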
In one example, the first computing task and the second computing task are feature computing tasks based on a sliding window algorithm.
In this example, a sliding window is a window that slides at a specified sliding interval; it divides the data into data sets, and the computation is performed on each divided data set. In one example, for streaming data, a window may partition the data sets by time. For example, when the number of clicks on a product is counted over 5 minutes, the time range covered by the window is 5 minutes.
In this example, a feature computing task is a task that computes one or more features, and different features correspond to different computations. For example, if the computing task is to count the number of clicks on a product, and four feature computing tasks count the clicks over 5 minutes, 10 minutes, 1 hour, and 1 day respectively, then 5 minutes, 10 minutes, 1 hour, and 1 day are each the feature of the corresponding feature computing task.
For example, if the first computing task counts the clicks on a commodity over 1 hour and the second computing task counts the clicks on the same commodity over 5 minutes, the two tasks use the same data source table and can therefore be merged.
With this example, after the first and second computing tasks with different features are merged, only one window needs to be created in the task running stage to execute both tasks, which saves computer resources.
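A sketch of two sliding-window feature tasks sharing one parsed click stream, assuming a simple count-within-window semantics rather than the hop windows of fig. 2; the class name and the timestamps are illustrative assumptions.

```python
from collections import deque

class SlidingCounter:
    """Counts events that fall inside a sliding time window of `size` seconds."""
    def __init__(self, size: float):
        self.size, self.times = size, deque()
    def add(self, ts: float) -> int:
        self.times.append(ts)
        while self.times and ts - self.times[0] > self.size:
            self.times.popleft()              # drop events outside the window
        return len(self.times)

# Two feature tasks (5-minute and 1-hour click counts) fed by the same
# parsed click stream, mirroring the merged tasks described above.
five_min, one_hour = SlidingCounter(5 * 60), SlidingCounter(60 * 60)
for ts in [0, 100, 400, 3000, 3500]:          # click timestamps in seconds
    print(ts, five_min.add(ts), one_hour.add(ts))
```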
Fig. 13 shows a block diagram of an apparatus for configuring a computing task (hereinafter referred to as a computing task configuring apparatus 1300) according to an embodiment of the present specification. As shown in fig. 13, the calculation task configuration device 1300 includes a source data determination unit 1310, a data source table check unit 1320, a parsing logic module configuration unit 1330, a calculation logic module configuration unit 1340, and a pipeline creation unit 1350.
The source data determination unit 1310 determines, in response to receiving the second computing task, a data source table and second source data for the second computing task. In one example of the present specification, the second computing task is a computing task defined based on an SQL statement; the source data determination unit 1310 parses the SQL statement to determine a data source table and second source data for the second computing task.
The data source table checking unit 1320 checks whether a data source table of the second calculation task is used by a first calculation task currently running, the first calculation task being run using a first pipeline including a first parsing logic module and a first calculation logic module.
When the data source table of the second computing task is used by the first computing task and the second source data is different from the first source data of the first computing task, the parsing logic module configuration unit 1330 adds an output configuration of the first parsing logic module, where the added output end is used to output a data parsing result for the second source data. In one example of the present description, the parsing logic module configuration unit 1330 may add an input configuration of a first parsing logic module, the added input for receiving second source data.
The computation logic module configuration unit 1340 adds an input/output configuration of the first computation logic module when the computation logic of the second computation task is the same as the computation logic of the first computation task, the added input is configured to receive a data parsing result for the second source data from the first parsing logic module, and the added output is used to output the computation result for the second source data.
The pipeline creation unit 1350 creates a second pipeline including a second computation logic module whose input is configured to receive a data parsing result for the second source data from the first pipeline when the computation logic of the second computation task is different from the computation logic of the first computation task.
In one example of this specification, the pipeline creation unit 1350 creates a second pipeline when a data source table of a second computing task is used by the first computing task, the second source data is the same as the first source data of the first computing task, and the computational logic of the second computing task is different from the computational logic of the first computing task, the second pipeline including a second computational logic module, an input of the second computational logic module configured to receive the data parse result output by the first parsing logic from the first pipeline.
In one example of this specification, the computation logic includes a sequence of sequentially executed sub-computation logics, wherein, when the sub-computation logic sequence of the second computing task is identical to that of the first computing task over only a partial prefix starting from the initial sub-computation logic, the second computation logic module in the created second pipeline includes a sequence of second sub-computation logic modules, one module for each sub-computation logic from the first differing sub-computation logic onward, and an input of the initial module of this second sequence is configured to receive the output of the last identical sub-computation logic module of the first computation logic module.
In one example of the present specification, the sequence of sub-computation logics includes a filtering sub-logic, a grouping sub-logic, and an aggregation function sub-logic that are executed sequentially.
In one example of this specification, the pipeline creation unit 1350 creates a second pipeline including a second parsing logic module and a second computation logic module when the data source table of the second computation task is not used by the first computation task.
Fig. 14 shows a block diagram of a system for performing computing tasks (hereinafter referred to as a computing task execution system 1400) according to an embodiment of the present description.
In one example, the computing task execution system 1400 may be a computing engine that may configure and execute computing tasks according to the configuration, outputting computing results.
As shown in fig. 14, the computing task performing system 1400 includes: task configuration platform 1410, task configuration framework 1420, and task execution platform 1430.
Task configuration platform 1410 is used to configure the received computing task. In one example, the task configuration platform 1410 may create a pipeline for the received computing task, the created pipeline is used to execute the computing task, and the execution of the pipeline is isolated from the execution of other computing tasks.
In another example, the task configuration platform 1410 may perform merging processing on at least two computing tasks that satisfy the merging condition, so as to reduce computer resources occupied by executing the computing tasks, and reduce load pressure of the processor, thereby improving the execution efficiency of the computing tasks. The operations to perform the merge process may refer to the related operations described above with reference to fig. 1-13.
The task configuration platform 1410 may configure the first computing task and the second computing task in the task configuration framework 1420. After configuration is complete, the task configuration framework 1420 may execute the first computing task and the second computing task based on the configured pipelines of parsing logic modules and computation logic modules.
The task execution platform 1430, in response to the task configuration platform 1410 completing the configuration, sends the acquired first source data of the first computing task and the acquired second source data of the second computing task to the task configuration framework 1420 to execute the first computing task and the second computing task, and receives an execution result fed back by the task configuration framework 1420.
In one example, task configuration platform 1410, upon completing the configuration, may notify task execution platform 1430 to cause task execution platform 1430 to begin executing the configured first and second computing tasks. In another example, upon completion of configuration by task configuration platform 1410, task configuration framework 1420 may notify task execution platform 1430 to cause task execution platform 1430 to begin executing the configured first and second computing tasks.
After receiving the first source data and the second source data, the task configuration framework 1420 processes the first source data and the second source data according to the configured pipeline for the first computing task and the second computing task, and obtains a first computing result for the first computing task and a second computing result for the second computing task. The first calculation result and the second calculation result are execution results.
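A minimal sketch of this data flow is given below, with the framework handing source data to a configured pipeline and returning the computation result as the execution result. All class names and the toy parse/compute functions are assumptions for illustration and do not correspond to the actual task configuration framework.

```python
# Minimal illustration only; the class names are assumptions.

class Pipeline:
    def __init__(self, parse_fn, compute_fn):
        self.parse_fn = parse_fn      # stands in for the parsing logic module
        self.compute_fn = compute_fn  # stands in for the computation logic module

    def run(self, source_data):
        return self.compute_fn(self.parse_fn(source_data))


class TaskConfigurationFramework:
    def __init__(self):
        self.pipelines = {}           # computing task id -> configured pipeline

    def execute(self, task_id, source_data):
        # The execution platform hands source data to the framework and
        # receives the computation result back as the execution result.
        return self.pipelines[task_id].run(source_data)


framework = TaskConfigurationFramework()
framework.pipelines["first_task"] = Pipeline(
    parse_fn=lambda rows: [r for r in rows if r is not None],
    compute_fn=sum,
)
print(framework.execute("first_task", [1, 2, None, 3]))   # 6
```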
The configuration method, configuration apparatus, and execution system for a computing task according to the embodiments of the present specification are described above with reference to figs. 1 to 14.
The configuration apparatus and the execution system for a computing task in the embodiments of the present specification may be implemented in hardware, in software, or in a combination of hardware and software. Taking software implementation as an example, as a logical apparatus, it is formed by the processor of the device in which it is located reading corresponding computer program instructions from storage into memory and running them. In the embodiments of the present specification, the configuration apparatus and the execution system for a computing task may be implemented, for example, by a computing device.
FIG. 15 illustrates a block diagram of a computing device for implementing a configuration method for computing tasks, according to an embodiment of the present description.
As shown in fig. 15, the computing device 1500 may include at least one processor 1510, a storage (e.g., a non-volatile storage) 1520, a memory 1530, and a communication interface 1540, which are coupled together via a bus 1550. The at least one processor 1510 executes at least one computer-readable instruction (i.e., the elements described above as being implemented in software) stored or encoded in the memory.
In one embodiment, computer-executable instructions are stored in the memory that, when executed, cause the at least one processor 1510 to: in response to receiving a second computing task, determine a data source table and second source data of the second computing task; check whether the data source table of the second computing task is used by a currently running first computing task, the first computing task being run using a first pipeline that includes a first parsing logic module and a first computation logic module; when the data source table of the second computing task is used by the first computing task and the second source data is different from the first source data of the first computing task, add an output configuration of the first parsing logic module, the added output being used to output a data parsing result for the second source data; when the computation logic of the second computing task is the same as the computation logic of the first computing task, add an input/output configuration of the first computation logic module, the added input being configured to receive the data parsing result for the second source data from the first parsing logic module, and the added output being used to output a computation result for the second source data; and when the computation logic of the second computing task is different from the computation logic of the first computing task, create a second pipeline that includes a second computation logic module, an input of the second computation logic module being configured to receive the data parsing result for the second source data from the first pipeline.
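The toy sketch below mimics the input/output reuse just described: a parsing module gains one output per source data, and a computation module gains an input/output pair so the same computation logic serves another parsing result. The classes and the stand-in parse step are illustrative assumptions, not the disclosed implementation.

```python
# Toy sketch only; these classes are illustrative assumptions that mimic
# the module reuse described above, not the disclosed implementation.

class ParsingModule:
    def __init__(self):
        self.outputs = {}                       # source id -> list of consumers

    def add_output(self, source_id, consumer):
        # "Adding the output configuration": a new output end that carries
        # the parsing result for that source data to its consumer.
        self.outputs.setdefault(source_id, []).append(consumer)

    def push(self, source_id, source_data):
        parsed = [str(x) for x in source_data]  # stand-in for real parsing
        for consumer in self.outputs.get(source_id, []):
            consumer(parsed)


class ComputationModule:
    def __init__(self, compute):
        self.compute = compute
        self.results = {}                       # source id -> computation result

    def add_io(self, parsing_module, source_id):
        # "Adding the input/output configuration": the same computation
        # logic receives one more parsing result and emits one more result.
        parsing_module.add_output(
            source_id,
            lambda parsed: self.results.__setitem__(source_id, self.compute(parsed)),
        )


first_parse = ParsingModule()
shared_compute = ComputationModule(compute=len)
shared_compute.add_io(first_parse, "first_source")    # first computing task
shared_compute.add_io(first_parse, "second_source")   # merged second computing task
first_parse.push("first_source", [1, 2, 3])
first_parse.push("second_source", [4, 5])
print(shared_compute.results)   # {'first_source': 3, 'second_source': 2}
```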
It should be appreciated that the computer-executable instructions stored in the memory, when executed, cause the at least one processor 1510 to perform the various operations and functions described above in connection with fig. 1-14 in the various embodiments of the present description.
According to one embodiment, a program product, such as a machine-readable medium, is provided. A machine-readable medium may have instructions (i.e., elements described above as being implemented in software) that, when executed by a machine, cause the machine to perform various operations and functions described above in connection with fig. 1-14 in the various embodiments of the present specification.
Specifically, a system or apparatus equipped with a readable storage medium may be provided, where software program code implementing the functions of any of the above embodiments is stored on the readable storage medium, and a computer or processor of the system or apparatus is caused to read out and execute the instructions stored in the readable storage medium.
In this case, the program code itself read from the readable medium can realize the functions of any of the above-described embodiments, and thus the machine-readable code and the readable storage medium storing the machine-readable code form part of the present invention.
Examples of the readable storage medium include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROMs, CD-Rs, CD-RWs, DVD-ROMs, DVD-RAMs, DVD-RWs), magnetic tapes, nonvolatile memory cards, and ROMs. Alternatively, the program code may be downloaded from a server computer or from the cloud via a communications network.
The foregoing description of specific embodiments has been presented for purposes of illustration and description. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Not all steps and elements in the above flows and system structure diagrams are necessary, and some steps or elements may be omitted according to actual needs. The execution order of the steps is not fixed, and can be determined as required. The apparatus structures described in the foregoing embodiments may be physical structures or logical structures, that is, some units may be implemented by the same physical entity, or some units may be implemented by multiple physical entities separately, or some units may be implemented by some components in multiple independent devices together.
The term "exemplary" used throughout this specification means "serving as an example, instance, or illustration," and does not mean "preferred" or "advantageous" over other embodiments. The detailed description includes specific details for the purpose of providing an understanding of the described technology. However, the techniques may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described embodiments.
Although the embodiments of the present disclosure have been described in detail with reference to the accompanying drawings, the embodiments of the present disclosure are not limited to the specific details of the embodiments, and various simple modifications may be made to the technical solutions of the embodiments of the present disclosure within the technical spirit of the embodiments of the present disclosure, and all of them fall within the scope of the embodiments of the present disclosure.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the description is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (18)

1. A method for configuring a computing task, comprising:
in response to receiving a second computing task, determining a data source table and second source data for the second computing task;
checking whether a data source table of the second computing task is used by a first computing task currently running, the first computing task being run using a first pipeline, the first pipeline including a first parsing logic module and a first computing logic module;
when the data source table of the second computing task is used by the first computing task and the second source data is different from the first source data of the first computing task, adding an output configuration of the first parsing logic module, wherein the added output end is used for outputting a data parsing result for the second source data;
adding an input/output configuration of the first computation logic module when the computation logic of the second computation task is the same as the computation logic of the first computation task, the added input configured to receive a data parsing result for the second source data from the first parsing logic module, and the added output for outputting the computation result for the second source data; and
creating a second pipeline when the computational logic of the second computational task and the computational logic of the first computational task are different, the second pipeline including a second computational logic module, an input of the second computational logic module configured to receive a data parsing result for the second source data from the first pipeline.
2. The method of claim 1, further comprising:
adding an input configuration of the first parsing logic module, the added input for receiving the second source data.
3. The method of claim 1, further comprising:
creating a second pipeline when a data source table of the second computing task is used by the first computing task, the second source data is the same as the first source data of the first computing task, and the computational logic of the second computing task is different from the computational logic of the first computing task, the second pipeline including a second computational logic module, an input of the second computational logic module configured to receive the data parsing result output by the first parsing logic module from the first pipeline.
4. The method of claim 1, wherein the computation logic comprises a sequence of sequentially executed sub-computation logics,
wherein, when the sub-computation logic sequence of the second computing task shares, starting from the start sub-computation logic, some sub-computation logics with the sub-computation logic sequence of the first computing task, the second computation logic module in the created second pipeline comprises a second sub-computation logic module sequence, each sub-computation logic module in the second sub-computation logic module sequence corresponding to a respective one of the differing sub-computation logics, and an input of the start sub-computation logic module of the second sub-computation logic module sequence is configured to receive an output of the last identical sub-computation logic module in the first computation logic module.
5. The method of claim 4, wherein the sequence of sub-computation logics includes a filtering sub-logic, a grouping sub-logic, and an aggregation function sub-logic that are executed sequentially.
6. The method of claim 1, further comprising:
creating a second pipeline comprising a second parsing logic module and a second computation logic module when the data source table of the second computing task is not used by the first computing task.
7. The method of claim 1, wherein the second computing task is a computing task defined based on an SQL statement;
and, in response to receiving the second computing task, determining a data source table and second source data of the second computing task comprises:
parsing the SQL statement to determine the data source table and the second source data of the second computing task.
8. The method of claim 1, wherein the first and second computing tasks are feature computing tasks based on a sliding window algorithm.
9. An apparatus for configuring a computing task, comprising:
a source data determination unit that, in response to receiving a second computing task, determines a data source table and second source data of the second computing task;
a data source table checking unit that checks whether the data source table of the second computing task is used by a currently running first computing task, wherein the first computing task is run using a first pipeline, and the first pipeline comprises a first parsing logic module and a first computation logic module;
a parsing logic module configuration unit that adds an output configuration of the first parsing logic module when the data source table of the second computing task is used by the first computing task and the second source data is different from the first source data of the first computing task, the added output end being used to output a data parsing result for the second source data;
a computation logic module configuration unit that adds an input/output configuration of the first computation logic module when the computation logic of the second computing task is the same as the computation logic of the first computing task, the added input terminal being configured to receive the data parsing result for the second source data from the first parsing logic module, and the added output terminal being used to output a computation result for the second source data; and
a pipeline creation unit that creates a second pipeline when the computation logic of the second computing task is different from the computation logic of the first computing task, the second pipeline including a second computation logic module, an input of the second computation logic module being configured to receive the data parsing result for the second source data from the first pipeline.
10. The apparatus of claim 9, wherein the parsing logic module configuration unit further:
adds an input configuration of the first parsing logic module, the added input being used to receive the second source data.
11. The apparatus of claim 9, wherein the pipeline creation unit further:
creates a second pipeline when the data source table of the second computing task is used by the first computing task, the second source data is the same as the first source data of the first computing task, and the computation logic of the second computing task is different from the computation logic of the first computing task, the second pipeline including a second computation logic module, an input of the second computation logic module being configured to receive the data parsing result output by the first parsing logic module from the first pipeline.
12. The apparatus of claim 9, wherein the computation logic comprises a sequence of sequentially executed sub-computation logics,
wherein, when the sub-computation logic sequence of the second computing task shares, starting from the start sub-computation logic, some sub-computation logics with the sub-computation logic sequence of the first computing task, the second computation logic module in the created second pipeline comprises a second sub-computation logic module sequence, each sub-computation logic module in the second sub-computation logic module sequence corresponding to a respective one of the differing sub-computation logics, and an input of the start sub-computation logic module of the second sub-computation logic module sequence is configured to receive an output of the last identical sub-computation logic module in the first computation logic module.
13. The apparatus of claim 12, wherein the sequence of sub-computation logics comprises a filtering sub-logic, a grouping sub-logic, and an aggregation function sub-logic that are executed sequentially.
14. The apparatus of claim 9, wherein the pipeline creation unit further:
creates a second pipeline comprising a second parsing logic module and a second computation logic module when the data source table of the second computing task is not used by the first computing task.
15. The apparatus of claim 9, wherein the second computing task is a computing task defined based on an SQL statement;
and the source data determination unit:
parses the SQL statement to determine the data source table and the second source data of the second computing task.
16. A system for executing computing tasks, comprising a task configuration platform, a task configuration framework, and a task execution platform, wherein:
the task configuration platform executes the method of any one of claims 1 to 8 to configure a first computing task and a second computing task in the task configuration framework; and
the task execution platform, in response to the task configuration platform completing the configuration, sends the acquired first source data of the first computing task and the acquired second source data of the second computing task to the task configuration framework to execute the first computing task and the second computing task, and receives an execution result fed back by the task configuration framework.
17. A computing device, comprising:
at least one processor, and
a memory coupled with the at least one processor, the memory storing instructions that, when executed by the at least one processor, cause the at least one processor to perform the method of any of claims 1-8.
18. A machine-readable storage medium storing executable instructions that, when executed, cause a machine to perform the method of any one of claims 1 to 8.
CN202010079567.8A 2020-02-04 2020-02-04 Configuration method, device and execution system of computing task Active CN111324434B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010079567.8A CN111324434B (en) 2020-02-04 2020-02-04 Configuration method, device and execution system of computing task

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010079567.8A CN111324434B (en) 2020-02-04 2020-02-04 Configuration method, device and execution system of computing task

Publications (2)

Publication Number Publication Date
CN111324434A CN111324434A (en) 2020-06-23
CN111324434B true CN111324434B (en) 2023-03-21

Family

ID=71167085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010079567.8A Active CN111324434B (en) 2020-02-04 2020-02-04 Configuration method, device and execution system of computing task

Country Status (1)

Country Link
CN (1) CN111324434B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279390A (en) * 2012-08-21 2013-09-04 中国科学院信息工程研究所 Parallel processing system for small operation optimizing
US9396023B1 (en) * 2014-12-30 2016-07-19 Qlogic, Corporation Methods and systems for parallel distributed computation
WO2018090784A1 (en) * 2016-11-18 2018-05-24 腾讯科技(深圳)有限公司 Distributed stream calculation-based processing method, system, physical equipment and storage medium
CN108694081A (en) * 2017-04-09 2018-10-23 英特尔公司 Rapid data operation for machine learning and finite state machine
CN107766132A (en) * 2017-06-25 2018-03-06 平安科技(深圳)有限公司 Multi-task scheduling method, application server and computer-readable recording medium
CN108897776A (en) * 2018-06-01 2018-11-27 郑州云海信息技术有限公司 A kind of arithmetic processing method of data information, device and computer storage medium
CN110377429A (en) * 2019-07-24 2019-10-25 深圳乐信软件技术有限公司 A kind of control method, device, server and storage medium that real-time task calculates

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhou Yuke; Ma Ting; Zhou Chenghu; Gao Xizhang; Fan Junfu. Design and Experiments of a Parallel Spatial Analysis System Based on MySQL Cluster and MPI. 2012, (04), full text. *

Also Published As

Publication number Publication date
CN111324434A (en) 2020-06-23

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant