CN112506636A - Distributed task scheduling method and device based on directed acyclic graph and storage medium - Google Patents

Distributed task scheduling method and device based on directed acyclic graph and storage medium

Info

Publication number
CN112506636A
CN112506636A
Authority
CN
China
Prior art keywords: processing flow, processing, flow, operation result, sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011487630.8A
Other languages
Chinese (zh)
Inventor
范强
张翔南
凌瀛洲
冯超
王家卓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhongtian Kongming Technology Co ltd
Original Assignee
Beijing Zhongtian Kongming Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhongtian Kongming Technology Co ltd
Priority to CN202011487630.8A
Publication of CN112506636A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/18 File system types
    • G06F16/182 Distributed file systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a distributed task scheduling method, device and storage medium based on a directed acyclic graph. The method comprises: constructing a directed acyclic graph, wherein the directed acyclic graph comprises a plurality of processing flows, the processing flows comprise a first processing flow and a second processing flow, and the second processing flow comprises a plurality of sub-flows; executing the processing flows sequentially, wherein executing the second processing flow comprises executing its sub-flows in parallel; and storing the operation result obtained after the first processing flow is executed and the operation result obtained after each sub-flow is executed in a distributed file system. The input data of the initial processing flow is preset data, and the operation result of each processing flow is the input data of the next. By splitting each processing flow in the directed acyclic graph into an independent task and caching the operation result of each task, flows that have already run do not have to be re-executed when a parameter turns out to be wrong during trial and error.

Description

Distributed task scheduling method and device based on directed acyclic graph and storage medium
Technical Field
The present invention relates to the field of task scheduling technologies, and in particular, to a distributed task scheduling method and apparatus based on a directed acyclic graph, and a storage medium.
Background
The data processing flow in the prior art is as follows:
FIG. 1 is a schematic diagram of a data processing process according to an embodiment of the prior art. Raw data passes through processing flow 1 and processing flow 2 in sequence, then through processing flow 3 and processing flow 4, and finally the required result is obtained. A single processing flow produces different operation results when it is given different input parameters, and for this case (one processing flow, several parameter sets) most existing schemes split the processing flow into two independent flows and execute them separately. For example, FIG. 2 is a schematic diagram of a data processing process according to another embodiment of the prior art. In this embodiment, processing flow 3 comprises two operation methods, "processing flow 3-1" and "processing flow 3-2", and processing flow 4 likewise comprises two operation methods, "processing flow 4-1" and "processing flow 4-2". Processing flow 3-1 and processing flow 3-2 perform the same processing, as do processing flow 4-1 and processing flow 4-2; only the parameters input to the two flows differ.
As a result, the data processing flow in FIG. 2 is split into two routes from the very beginning, so both routes execute "processing flow 1" and "processing flow 2", and when computing resources (the number of computing servers) are tight, the second route can only run after the first computation has finished. If a problem occurs in an intermediate processing flow, the whole process must be executed again once the problem is solved. For example, if "processing flow 2" has a problem, then after the problem is fixed "processing flow 1" must be executed again before "processing flow 2" can be re-executed, which wastes time and computing resources.
Data cleaning and machine learning algorithms also require a large number of parameters, which must be tuned continuously while each processing flow is executed. If the parameters of one processing flow need to be modified, the whole process has to be run again, which likewise wastes time and computing resources.
Therefore, in the big data era, with data volumes growing ever larger, how to use computing resources efficiently and reduce the trial-and-error cost (mainly the time cost of wrong or adjusted parameters) is a problem that needs to be solved.
Disclosure of Invention
(I) Objects of the invention
The invention aims to provide a distributed task scheduling method, a distributed task scheduling device and a storage medium based on a directed acyclic graph.
(II) Technical scheme
To solve the above problem, according to one aspect of the present invention, a distributed task scheduling method based on a directed acyclic graph is provided, comprising: constructing a directed acyclic graph, wherein the directed acyclic graph comprises a plurality of processing flows, the processing flows comprise a first processing flow and a second processing flow, and the second processing flow comprises a plurality of independent sub-flows; executing the plurality of processing flows sequentially, wherein executing the second processing flow comprises executing the plurality of sub-flows included in the second processing flow in parallel; and storing the operation result obtained after the first processing flow is executed and the operation result obtained after each sub-flow is executed in a distributed file system; wherein the input data of the initial processing flow is preset data, and the operation result of the previous processing flow is the input data of the next processing flow.
Further, the method also comprises: extracting the operation result of the first processing flow from the distributed file system and inputting it into the next processing flow.
Further, the method also comprises: extracting the operation results of the plurality of sub-flows in the second processing flow from the distributed file system, comparing them to obtain an optimal operation result, and inputting the optimal operation result into the next processing flow.
Further, the plurality of sub-flows in the same second processing flow have the same input data.
Further, each processing flow comprises different input parameters and different operation logic, and each sub-flow likewise comprises different input parameters and different operation logic.
According to another aspect of the present invention, a distributed task scheduling apparatus based on a directed acyclic graph is also provided, comprising: a construction module, configured to construct a directed acyclic graph, wherein the directed acyclic graph comprises a plurality of processing flows, the processing flows comprise a first processing flow and a second processing flow, and the second processing flow comprises a plurality of independent sub-flows; an execution module, configured to execute the plurality of processing flows sequentially, wherein executing the second processing flow comprises executing the plurality of sub-flows included in the second processing flow in parallel; and a storage module, configured to store the operation result obtained after the first processing flow is executed and the operation result obtained after each sub-flow is executed in the distributed file system; wherein the input data of the initial processing flow is preset data, and the operation result of the previous processing flow is the input data of the next processing flow.
Further, the apparatus also comprises: a first data extraction module, configured to extract the operation result of the first processing flow from the distributed file system and input it into the next processing flow.
Further, the apparatus also comprises: a second data extraction module, configured to extract the operation results of the plurality of sub-flows in the second processing flow from the distributed file system; and a comparison module, configured to compare the operation results of the plurality of sub-flows to obtain an optimal operation result and input the optimal operation result into the next processing flow.
According to another aspect of the present invention, there is also provided a storage medium having stored thereon a computer program which, when executed by a processor, performs the method for distributed task scheduling based on directed acyclic graphs as set forth above.
(III) Advantageous effects
The technical scheme of the invention has the following beneficial technical effects:
According to the invention, each processing flow is split into an independent task and the operation result of each task is cached. If a problem occurs in an intermediate processing flow, only the current flow needs to be run again after the problem is solved; the flows that have already run do not need to be executed repeatedly.
If the parameters input to a certain processing flow prove unsatisfactory, only the current flow and the flows after it need to be executed again after the parameters are modified. This not only saves time but also reduces repeated computation.
Drawings
FIG. 1 is a schematic diagram of a data processing process according to an embodiment of the prior art;
FIG. 2 is a schematic diagram of a data processing process according to another embodiment of the prior art;
FIG. 3 is a flowchart illustrating steps of a distributed task scheduling method based on a directed acyclic graph according to the present invention;
FIG. 4 is a schematic diagram of a data processing process of a first embodiment of a distributed task scheduling method based on a directed acyclic graph according to the present invention;
FIG. 5 is a schematic diagram of a data processing process of a second embodiment of the distributed task scheduling method based on the directed acyclic graph according to the present invention;
FIG. 6 is a schematic diagram of a data processing process of a third embodiment of the distributed task scheduling method based on the directed acyclic graph provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings. It should be understood that the description is intended to be exemplary only and is not intended to limit the scope of the present invention. Moreover, descriptions of well-known structures and techniques are omitted in the following so as not to unnecessarily obscure the concepts of the present invention.
The present invention will now be described in detail with reference to the accompanying drawings and examples. FIG. 3 is a flowchart of the steps of the distributed task scheduling method based on a directed acyclic graph. Referring to FIG. 3, this embodiment provides a distributed task scheduling method based on a directed acyclic graph, comprising the following steps:
step S1: and constructing a directed acyclic graph, wherein the directed acyclic graph comprises a plurality of processing flows, the processing flows comprise a first processing flow and a second processing flow, and the second processing flow comprises a plurality of independent sub-flows.
Specifically, all the processing flows together form the overall data processing flow of the directed acyclic graph. Each processing flow has its own operation logic and input parameters, and different processing flows have different input parameters and different operation logic. The first processing flow does not include any sub-flows, while the second processing flow includes a plurality of independent sub-flows.
The second processing flow branches into a plurality of independent sub-flows, and when the second processing flow is executed each sub-flow must be executed independently and in parallel. Each sub-flow has its own operation logic and input parameters, and different sub-flows have different input parameters and different operation logic, so the operation result of each sub-flow is different.
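The patent gives no reference implementation, but the structure described in step S1 can be sketched in Python as follows. All names here (ProcessFlow, SubFlow, params, logic) and the toy lambdas are illustrative assumptions rather than the patent's own terminology.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class SubFlow:
    """One independent branch of a second-type processing flow."""
    name: str
    params: Dict[str, Any]
    logic: Callable[[Any, Dict[str, Any]], Any]  # (input_data, params) -> operation result

@dataclass
class ProcessFlow:
    """One node of the directed acyclic graph."""
    name: str
    params: Dict[str, Any] = field(default_factory=dict)
    logic: Callable[[Any, Dict[str, Any]], Any] = None      # used by first-type flows
    sub_flows: List[SubFlow] = field(default_factory=list)  # non-empty only for second-type flows

# A small linear DAG: flow_1 -> flow_2 -> flow_3 (with two parallel sub-flows).
dag: List[ProcessFlow] = [
    ProcessFlow("flow_1", {"threshold": 0.5},
                lambda data, p: [x for x in data if x > p["threshold"]]),
    ProcessFlow("flow_2", {"scale": 2},
                lambda data, p: [x * p["scale"] for x in data]),
    ProcessFlow("flow_3", sub_flows=[
        SubFlow("flow_3_1", {"offset": 1}, lambda data, p: [x + p["offset"] for x in data]),
        SubFlow("flow_3_2", {"offset": 5}, lambda data, p: [x + p["offset"] for x in data]),
    ]),
]
```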
Step S2: executing the plurality of processing flows sequentially, wherein executing the second processing flow comprises: executing the plurality of sub-flows in parallel.
Specifically, the processing flows must be executed in order: a subsequent processing flow can only be executed after the previous processing flow has finished. The sub-flows, by contrast, are executed independently and in parallel and do not affect one another. For example, suppose processing flow A and processing flow B are both first processing flows, processing flow C is a second processing flow, and processing flow C includes sub-flow C1, sub-flow C2 and sub-flow C3. Processing flow A, processing flow B and processing flow C are executed in sequence; when processing flow B finishes and processing flow C is to be executed, sub-flow C1, sub-flow C2 and sub-flow C3 are executed independently and in parallel.
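Continuing the sketch above, sequential execution with parallel sub-flows might look like the following. ThreadPoolExecutor is only a stand-in for whatever distributed executor a real scheduler would use, and run_flow/run_dag are hypothetical helper names.

```python
from concurrent.futures import ThreadPoolExecutor

def run_flow(flow: ProcessFlow, input_data):
    """Run one node: a first-type flow runs its logic directly; a second-type flow
    runs all of its sub-flows independently and in parallel."""
    if not flow.sub_flows:
        return flow.logic(input_data, flow.params)
    with ThreadPoolExecutor(max_workers=len(flow.sub_flows)) as pool:
        futures = {sf.name: pool.submit(sf.logic, input_data, sf.params)
                   for sf in flow.sub_flows}
        # All sub-flows share the same input data; their results are kept per name.
        return {name: fut.result() for name, fut in futures.items()}

def run_dag(dag, initial_data):
    """Execute the flows strictly in order; each flow consumes the previous flow's result."""
    data = initial_data
    for flow in dag:
        data = run_flow(flow, data)
    return data

print(run_dag(dag, [0.2, 0.7, 1.3]))  # prints the two sub-flow results keyed by name
```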
Step S3: storing the operation result obtained after the first processing flow is executed and the operation result obtained after each sub-flow is executed in the distributed file system.
Specifically, each processing flow has its own operation result, and after the flow finishes its result is stored in the distributed file system so it can be retrieved later. Likewise, the operation results obtained by the independent sub-flows of each second processing flow are stored in the distributed file system. For example, the operation results of processing flow A and processing flow B are stored in the distributed file system, and so are the operation results of sub-flow C1, sub-flow C2 and sub-flow C3 of processing flow C.
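A minimal sketch of the result cache described in step S3, assuming operation results are JSON-serializable. A local directory stands in for the distributed file system (Alluxio in the embodiments below); CACHE_ROOT, save_result and load_result are illustrative names, not an Alluxio API.

```python
import json
import os

# CACHE_ROOT stands in for a mount point of the distributed file system; a local
# directory is used here so the sketch runs anywhere. The path is hypothetical.
CACHE_ROOT = "/tmp/dag_cache"

def save_result(flow_name: str, result) -> str:
    """Persist one flow's (or sub-flow's) operation result so later flows can reuse it."""
    os.makedirs(CACHE_ROOT, exist_ok=True)
    path = os.path.join(CACHE_ROOT, f"{flow_name}.json")
    with open(path, "w") as f:
        json.dump(result, f)
    return path

def load_result(flow_name: str):
    """Read a previously cached operation result; returns None if it was never stored."""
    path = os.path.join(CACHE_ROOT, f"{flow_name}.json")
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)
```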
Step S4: extracting the operation result of the first processing flow from the distributed file system and/or extracting the operation results of the plurality of sub-flows in the second processing flow from the distributed file system.
Specifically, in this embodiment, the input data of the initial processing flow is preset data supplied by the user, and the input data of each subsequent processing flow is the operation result of the previous processing flow. Since the operation results have been stored in the distributed file system in step S3, step S4 extracts the operation result of the previous processing flow from the distributed file system to serve as the input data of the next processing flow.
Step S5: inputting the operation result of the first processing flow extracted from the distributed file system into the next processing flow as its input data; and/or extracting the operation results of the plurality of sub-flows in the second processing flow from the distributed file system, comparing them, and inputting the optimal operation result into the next processing flow as its input data.
Specifically, on the basis of step S4, the operation result of each sub-flow is read from the distributed file system, and the results are compared according to a comparison method set by the user to obtain an optimal operation result, which becomes the operation result of the processing flow and is input into the next processing flow.
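A sketch of this comparison step, assuming the user supplies a scoring function; select_best and the sum-based metric are illustrative only, since the patent leaves the comparison method to the user.

```python
from typing import Any, Callable, Dict

def select_best(sub_results: Dict[str, Any], score: Callable[[Any], float]):
    """Compare the cached results of all sub-flows and return the best one, which
    then becomes the input data of the next processing flow."""
    best_name = max(sub_results, key=lambda name: score(sub_results[name]))
    return best_name, sub_results[best_name]

# The comparison rule is user-defined; a sum-based score is only an example.
results = {"flow_3_1": [2.4, 3.6], "flow_3_2": [6.4, 7.6]}
best_name, best = select_best(results, score=sum)  # -> ("flow_3_2", [6.4, 7.6])
```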
Example one
FIG. 4 is a schematic diagram of the data processing process of a first embodiment of the distributed task scheduling method based on the directed acyclic graph provided by the present invention; the method is further explained below with reference to FIG. 4.
After a user starts a task, a directed acyclic graph is constructed; each processing flow in the directed acyclic graph is a node in FIG. 4. The raw data passes through processing flow 1 and processing flow 2 in sequence and then enters processing flow 3 and processing flow 4. In this embodiment, for the case of one processing flow with different parameter sets, the processing flow is split into two independent flows that are executed separately.
As shown in FIG. 4, processing flow 3 is split into two different flows, "processing flow 3-1" and "processing flow 3-2", and processing flow 4 is split into two different flows, "processing flow 4-1" and "processing flow 4-2". Processing flow 3-1 and processing flow 3-2 perform the same processing, as do processing flow 4-1 and processing flow 4-2; only the parameters input to the two flows differ.
In this embodiment, the two independent flows can run in parallel.
First, processing flow 1 is executed, and when it finishes its operation result is stored in Alluxio. Alluxio is a memory-based distributed file system; it is middleware positioned between the underlying distributed file system and the distributed computing frameworks above it, and its main responsibility is to serve data, in the form of files, from memory or other storage facilities.
The operation result of processing flow 1 stored in Alluxio is then extracted and used as the input data of processing flow 2. Processing flow 2 is executed, and after the processing finishes its operation result is likewise stored in Alluxio.
Next, "processing flow 3" branches into two sub-flows: "processing flow 3-1" and "processing flow 3-2". They are executed in parallel and share a common input, namely the operation result of processing flow 2, but the two sub-flows are independent of each other and do not affect each other.
If the user is not satisfied with the final output or with the input parameters of a certain processing flow, the user only needs to modify the parameters of the current flow or sub-flow, extract the operation result of the previous flow from Alluxio, feed it into the modified flow or sub-flow, and re-execute the current flow and the flows after it.
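Combining the earlier sketches (run_flow, save_result, load_result), this trial-and-error re-execution could be expressed as follows: only the flow whose parameters changed, and the flows after it, are run again, while the predecessor's result is read back from the cache. rerun_from is a hypothetical helper.

```python
def rerun_from(dag, changed_flow_name: str, initial_data):
    """Re-execute the DAG starting at the flow whose parameters were modified.
    Upstream flows are not recomputed: the predecessor's cached result is read back."""
    idx = next(i for i, f in enumerate(dag) if f.name == changed_flow_name)
    data = load_result(dag[idx - 1].name) if idx > 0 else initial_data
    for flow in dag[idx:]:
        data = run_flow(flow, data)
        save_result(flow.name, data)   # keep the cache up to date for the next trial
    return data

# Simulate the first run's cache for flow_1, then tweak flow_2's parameters;
# only flow_2 and flow_3 run again, while flow_1's cached result is reused.
save_result("flow_1", run_flow(dag[0], [0.2, 0.7, 1.3]))
dag[1].params["scale"] = 3
final = rerun_from(dag, "flow_2", initial_data=[0.2, 0.7, 1.3])
```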
Example two
FIG. 5 is a schematic diagram of the data processing process of a second embodiment of the distributed task scheduling method based on the directed acyclic graph provided by the present invention; the method is further explained below with reference to FIG. 5.
After a user starts a task, a directed acyclic graph is constructed; each processing flow in the directed acyclic graph is a node in FIG. 5.
First, processing flow 1 is executed, and when it finishes its operation result is stored in Alluxio.
The operation result of processing flow 1 stored in Alluxio is then extracted and used as the input data of processing flow 2. Processing flow 2 is executed, and after the processing finishes its operation result is likewise stored in Alluxio.
Next, "processing flow 3" branches into four sub-flows: "processing flow 3-1", "processing flow 3-2", "processing flow 3-3" and "processing flow 3-4". They are executed in parallel and all share a common input, namely the operation result of processing flow 2, but they are independent of one another and do not affect one another.
If processing flow 3-2 finishes first while processing flow 3-1, processing flow 3-3 and processing flow 3-4 have not yet finished, the two downstream flows that depend on it, processing flow 4-x-1 and processing flow 4-x-2, can be started directly, because their input data is the operation result of processing flow 3-2.
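A sketch of this dependency-driven start: each branch submits its own downstream flows as soon as it finishes, without waiting for its sibling branches. The lambdas are placeholders for the real processing flows, and the thread pool stands in for the distributed scheduler.

```python
from concurrent.futures import ThreadPoolExecutor

def run_branch(sub_flow, downstream, input_data, pool):
    """Run one sub-flow of 'processing flow 3'; the moment it finishes, submit its own
    'processing flow 4' variants without waiting for the sibling branches."""
    result = sub_flow(input_data)
    return [pool.submit(fn, result) for fn in downstream]

with ThreadPoolExecutor(max_workers=8) as pool:
    # Each tuple: (a sub-flow of flow 3, its two flow-4 variants). Plain lambdas stand in
    # for the real processing flows; the numbers carry no meaning beyond the example.
    branches = [
        (lambda d: d + 1, [lambda r: r * 2, lambda r: r * 3]),
        (lambda d: d + 5, [lambda r: r * 2, lambda r: r * 3]),
    ]
    branch_futures = [pool.submit(run_branch, sf, ds, 10, pool) for sf, ds in branches]
    # Collect the leaf results once every branch and its downstream flows have finished.
    leaf_results = [leaf.result() for bf in branch_futures for leaf in bf.result()]
print(leaf_results)  # e.g. [22, 33, 30, 45]
```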
By analogy, once the eight flows from result 1-1 to result 4-2 have all been run, the compare-results node starts to execute: it compares the operation results of the eight sub-flows and uses the optimal operation result as the input data of the next processing flow.
Example three
FIG. 6 is a schematic diagram of the data processing process of a third embodiment of the distributed task scheduling method based on the directed acyclic graph provided by the present invention; the method is further explained below with reference to FIG. 6.
If the user is not satisfied with the result of the whole process and the input parameters of a certain processing flow need to be adjusted, for example the input parameters of processing flow 3-2, then in theory the modified parameters will not change the operation result of processing flow 2, nor the operation results of processing flow 3-1, processing flow 3-3, processing flow 3-4 or the sub-flows of those flows.
Thus, after the user initiates a task, the flow chart constructed is shown in FIG. 6.
First, processing flow 3-2 is executed, reading the operation result of processing flow 2 from Alluxio as its input data. Processing flow 4-2-1 and processing flow 4-2-2 are then executed in parallel. After result 2-1 and result 2-2 are obtained, the other six cached operation results are extracted from Alluxio and the compare-results task is executed to obtain the optimal operation result, which is used as the input data of the next processing flow.
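Reusing the load_result and select_best helpers sketched earlier, the partial re-run of this example might look as follows; the result names and the stub functions for the recomputed leaves are purely illustrative.

```python
# Stand-ins for the two freshly recomputed leaves (processing flows 4-2-1 and 4-2-2);
# in the real pipeline these would be the re-executed flows after the parameter change.
def run_flow_4_2_1():
    return [2.0, 3.0]

def run_flow_4_2_2():
    return [2.5, 3.5]

fresh = {"result_2_1": run_flow_4_2_1(), "result_2_2": run_flow_4_2_2()}

# The six untouched results are read back from the cache rather than recomputed
# (they are assumed to have been stored by earlier runs; missing ones are skipped).
cached_names = ["result_1_1", "result_1_2", "result_3_1", "result_3_2",
                "result_4_1", "result_4_2"]
cached = {name: result for name in cached_names
          if (result := load_result(name)) is not None}

# Compare all available results and feed the optimal one into the next processing flow.
best_name, best = select_best({**cached, **fresh}, score=sum)
```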
In summary, the invention protects a distributed task scheduling method, device and storage medium based on a directed acyclic graph. The method comprises constructing a directed acyclic graph that comprises a plurality of processing flows, the processing flows including a first processing flow and a second processing flow, the second processing flow comprising a plurality of sub-flows; executing the processing flows sequentially, wherein executing the second processing flow comprises executing its sub-flows in parallel; and storing the operation result obtained after the first processing flow is executed and the operation result obtained after each sub-flow is executed in a distributed file system; the input data of the initial processing flow is preset data, and the operation result of each processing flow is the input data of the next. A corresponding distributed task scheduling device and a storage medium are also protected. By splitting each processing flow in the directed acyclic graph into an independent task and caching the operation result of each task, flows that have already run do not have to be re-executed when a parameter turns out to be wrong during trial and error.
It is to be understood that the above-described embodiments of the present invention merely illustrate or explain the principles of the invention and are not to be construed as limiting it. Any modification, equivalent replacement or improvement made without departing from the spirit and scope of the present invention falls within its protection scope. Further, the appended claims are intended to cover all such variations and modifications as fall within their scope and boundaries, or the equivalents of such scope and boundaries.

Claims (9)

1. A distributed task scheduling method based on a directed acyclic graph is characterized by comprising the following steps:
constructing a directed acyclic graph, wherein the directed acyclic graph comprises a plurality of processing flows, the processing flows comprise a first processing flow and a second processing flow, and the second processing flow comprises a plurality of independent sub-flows;
executing the plurality of processing flows sequentially, wherein executing the second processing flow comprises: executing the plurality of sub-flows included in the second processing flow in parallel;
storing the operation result obtained after the first processing flow is executed and the operation result obtained after each sub-flow is executed into a distributed file system;
the input data of the initial processing flow is preset data, and the operation result of the previous processing flow is input data of the next processing flow.
2. The method of claim 1, further comprising:
extracting the operation result of the first processing flow from the distributed file system, and inputting the operation result into the next processing flow.
3. The method of claim 1, further comprising:
extracting the operation results of the plurality of sub-flows in the second processing flow from the distributed file system;
comparing the operation results of the plurality of sub-flows to obtain an optimal operation result, and inputting the optimal operation result into the next processing flow.
4. The method of claim 1, wherein
the plurality of sub-flows in the same second processing flow have the same input data.
5. The method of claim 1, wherein
each processing flow comprises different input parameters and different operation logic;
each of the sub-flows comprises different input parameters and different operation logic.
6. A distributed task scheduling device based on a directed acyclic graph, comprising:
a building module, configured to build a directed acyclic graph, wherein the directed acyclic graph comprises a plurality of processing flows, the processing flows comprise a first processing flow and a second processing flow, and the second processing flow comprises a plurality of independent sub-flows;
an execution module, configured to execute the plurality of processing flows sequentially, wherein executing the second processing flow comprises: executing the plurality of sub-flows included in the second processing flow in parallel;
a storage module, configured to store the operation result obtained after the first processing flow is executed and the operation result obtained after each sub-flow is executed in the distributed file system;
the input data of the initial processing flow is preset data, and the operation result of the previous processing flow is input data of the next processing flow.
7. The apparatus of claim 6, further comprising:
a first data extraction module, configured to extract the operation result of the first processing flow from the distributed file system and input the operation result into the next processing flow.
8. The apparatus of claim 6, further comprising:
a second data extraction module, configured to extract the operation results of the plurality of sub-flows in the second processing flow from the distributed file system;
and a comparison module, configured to compare the operation results of the plurality of sub-flows to obtain an optimal operation result, and input the optimal operation result into the next processing flow.
9. A storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method according to any one of claims 1 to 5.
CN202011487630.8A 2020-12-16 2020-12-16 Distributed task scheduling method and device based on directed acyclic graph and storage medium Pending CN112506636A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011487630.8A CN112506636A (en) 2020-12-16 2020-12-16 Distributed task scheduling method and device based on directed acyclic graph and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011487630.8A CN112506636A (en) 2020-12-16 2020-12-16 Distributed task scheduling method and device based on directed acyclic graph and storage medium

Publications (1)

Publication Number Publication Date
CN112506636A true CN112506636A (en) 2021-03-16

Family

ID=74972652

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011487630.8A Pending CN112506636A (en) 2020-12-16 2020-12-16 Distributed task scheduling method and device based on directed acyclic graph and storage medium

Country Status (1)

Country Link
CN (1) CN112506636A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104778074A (en) * 2014-01-14 2015-07-15 腾讯科技(深圳)有限公司 Calculation task processing method and device
WO2020246965A1 (en) * 2019-06-04 2020-12-10 Huawei Technologies Co., Ltd. Task distribution across multiple processing devices
CN110554909A (en) * 2019-09-06 2019-12-10 腾讯科技(深圳)有限公司 task scheduling processing method and device and computer equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114510338A (en) * 2022-04-19 2022-05-17 浙江大华技术股份有限公司 Task scheduling method, task scheduling device and computer readable storage medium
CN114510338B (en) * 2022-04-19 2022-09-06 浙江大华技术股份有限公司 Task scheduling method, task scheduling device and computer readable storage medium
CN115695432A (en) * 2023-01-04 2023-02-03 河北华通科技股份有限公司 Load balancing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination