CN106293940A - A method for parallel batch running in the financial industry - Google Patents

A method for parallel batch running in the financial industry

Info

Publication number
CN106293940A
Authority
CN
China
Prior art keywords
parallel
data
batch
run
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610645308.0A
Other languages
Chinese (zh)
Inventor
丁周芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur General Software Co Ltd
Original Assignee
Inspur General Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur General Software Co Ltd filed Critical Inspur General Software Co Ltd
Priority to CN201610645308.0A priority Critical patent/CN106293940A/en
Publication of CN106293940A publication Critical patent/CN106293940A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/48 Indexing scheme relating to G06F9/48
    • G06F 2209/481 Exception handling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/48 Indexing scheme relating to G06F9/48
    • G06F 2209/483 Multiproc
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/5017 Task decomposition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/5018 Thread allocation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/503 Resource availability
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/508 Monitor

Abstract

The invention discloses a method for parallel batch running in the financial industry. Its implementation process is: a task is split into several subtasks and the data set is divided into several data slices; an independently and parallel-executable subtask package is built from each subtask and its corresponding data slice, and these subtask packages are placed into a unified thread pool for independent parallel execution. Compared with the prior art, the method for parallel batch running in the financial industry of the present invention can make full use of the resources of the existing system and improve the performance of batch processing. During parallel execution, the subtask execution states and error information and the data execution states and error information are recorded, so that the task execution process can be tracked and made fault-tolerant. The method is highly practical and widely applicable.

Description

A method for parallel batch running in the financial industry
Technical field
The present invention relates to the fields of parallel computing and batch processing systems, and in particular to a method for parallel batch running in the financial industry.
Background art
Batch running is one of the core business functions in the financial industry: operations such as accrual, interest calculation and day-end cutover are all executed during the batch run. The volume of data processed is very large, and the requirements on program reliability and performance are very high. The traditional approach processes the data record by record in a single thread. This effectively guarantees reliability, but its performance is very low, and when the data volume is large the batch run takes a very long time. If one record fails, none of the subsequent records can be processed, and the records already processed in the batch may be rolled back. In addition, all other business operations are suspended while the batch is being processed, so a large amount of CPU, memory, network and other resources is available; the current approach does not make full use of these resources, causing significant waste.
As can be seen, existing batch run systems suffer from problems such as low performance, poor fault tolerance and serious waste of resources. On this basis, a high-performance method for parallel batch running in the financial industry with a degree of fault tolerance is provided.
Summary of the invention
The technical task of the present invention is to address the above shortcomings by providing a high-performance method for parallel batch running in the financial industry that has a degree of fault tolerance and makes effective use of resources.
A method for parallel batch running in the financial industry. Its implementation process is: a task is split into several subtasks and the data set is divided into several data slices; an independently and parallel-executable subtask package is built from each subtask and its corresponding data slice, and these subtask packages are placed into a unified thread pool for independent parallel execution.
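A minimal sketch of this structure in Java (the language the detailed description names) is shown below; the class names, the use of Callable and ExecutorService, and the Boolean result are illustrative assumptions rather than details taken from the patent.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelBatchRunner {

    // A subtask package: one subtask bound to its own data slice.
    static class SubtaskPackage implements Callable<Boolean> {
        private final int subtaskNo;
        private final List<Long> dataSlice; // data IDs assigned to this subtask

        SubtaskPackage(int subtaskNo, List<Long> dataSlice) {
            this.subtaskNo = subtaskNo;
            this.dataSlice = dataSlice;
        }

        @Override
        public Boolean call() {
            for (Long dataId : dataSlice) {
                // process one business record; stop this subtask on the first failure
                if (!processRecord(dataId)) {
                    return false;
                }
            }
            return true;
        }

        private boolean processRecord(Long dataId) {
            // placeholder for the real business processing of one record
            return true;
        }
    }

    public static boolean runInParallel(List<List<Long>> slices) throws Exception {
        // the unified thread pool into which all subtask packages are submitted
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, slices.size()));
        try {
            List<Future<Boolean>> results = new ArrayList<>();
            for (int k = 0; k < slices.size(); k++) {
                results.add(pool.submit(new SubtaskPackage(k + 1, slices.get(k))));
            }
            boolean allOk = true;
            for (Future<Boolean> f : results) {
                allOk &= f.get(); // wait for every subtask package to finish
            }
            return allOk;         // false means the batch must be retried later
        } finally {
            pool.shutdown();
        }
    }
}
```

In practice the pool size, the slices and the result handling would be driven by the task and data distribution tables described below rather than by in-memory lists.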
Before task slicing is performed, a resource pre-processing step is first required, which is specifically as follows:
lock the resources;
check whether the system can perform batch processing; if it can, query and obtain all of the batch scheduling tasks, otherwise return;
process each batch scheduling task in a loop, then enter the task slicing step;
correspondingly, after all of the batch scheduling tasks have executed successfully, unlock the resources.
The task slicing process is: the task is sliced according to the configured task parallelism parameter, and the subtasks obtained from slicing are stored in a task distribution table.
The data set sharding process is: the data is sharded according to the number of subtasks, the data IDs after sharding are stored in a data distribution table, and an association with the subtasks is established.
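As an illustration of the sharding step, the following sketch assigns each data ID to a subtask number in contiguous blocks, with the last subtask absorbing any remainder; the method name and the in-memory map stand in for the data distribution table and are assumptions.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DataSharder {

    // Assign each data ID to a subtask number (1..n) in contiguous blocks.
    // The resulting map corresponds to the data-ID-to-subtask association
    // that is persisted in the data distribution table.
    public static Map<Long, Integer> shard(List<Long> dataIds, int n) {
        int total = dataIds.size();
        int blockSize = Math.max(total / n, 1);
        Map<Long, Integer> assignment = new LinkedHashMap<>();
        for (int i = 0; i < total; i++) {
            int subtaskNo = Math.min(i / blockSize + 1, n); // last subtask takes the remainder
            assignment.put(dataIds.get(i), subtaskNo);
        }
        return assignment;
    }
}
```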
The parallel execution process after the independent subtask packages have been built is:
process each business data record in each data slice associated with the parallel task package;
update the data execution state in the data distribution table according to whether the business data record is processed successfully: if the record is processed successfully, its state is set to success; if processing fails, its state is set to failure, the error information is recorded, and execution of this subtask is terminated; records whose data execution state is success will not be executed again on a batch retry;
repeat the above two steps until all data records in the data slice corresponding to this subtask have been processed successfully, or processing has failed;
update the subtask execution state in the task distribution table according to the subtask's execution on its corresponding data slice: if the subtask executes successfully on all data of its corresponding data slice, its execution state is updated to success; otherwise it is updated to failure and the error information is recorded; subtasks whose execution state is success will not be executed again on a batch retry;
wait for all parallel subtask packages to finish executing;
perform an idempotency check to verify whether all parallel subtask packages have executed successfully; if they have not all executed successfully, terminate execution and wait for a batch retry.
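A sketch of this per-subtask execution loop is given below; the state constants, the repository interface and the exception-based error handling are assumptions made for illustration, not part of the patent text.

```java
import java.util.List;

public class SubtaskExecutor {

    public static final String SUCCESS = "SUCCESS";
    public static final String FAILURE = "FAILURE";

    // Minimal view of the task and data distribution tables used by one subtask.
    public interface DistributionRepository {
        List<Long> pendingDataIds(int subtaskNo);                 // records not yet marked SUCCESS
        void updateDataState(long dataId, String state, String error);
        void updateSubtaskState(int subtaskNo, String state, String error);
    }

    public interface RecordProcessor {
        void process(long dataId) throws Exception;               // business processing of one record
    }

    // Process every record of the subtask's slice and stop on the first failure.
    // Records already marked SUCCESS are skipped, so a batch retry only redoes failures.
    public static boolean execute(int subtaskNo, DistributionRepository repo, RecordProcessor processor) {
        for (long dataId : repo.pendingDataIds(subtaskNo)) {
            try {
                processor.process(dataId);
                repo.updateDataState(dataId, SUCCESS, null);
            } catch (Exception e) {
                repo.updateDataState(dataId, FAILURE, e.getMessage());
                repo.updateSubtaskState(subtaskNo, FAILURE, e.getMessage());
                return false;                                     // terminate this subtask
            }
        }
        repo.updateSubtaskState(subtaskNo, SUCCESS, null);
        return true;
    }
}
```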
During the parallel batch run, task tracking and fault tolerance steps are also included, i.e., during parallel execution the subtask execution states and error information and the data execution states and error information are recorded, so that the task execution process is tracked and made fault-tolerant.
The task tracking and fault tolerance process is:
record the execution log of the batch scheduling task that is being executed;
update the execution state of the batch scheduling task according to its execution result: if the batch scheduling task executes successfully, its execution state is updated to success, otherwise it is updated to failure; batch scheduling tasks whose state is success will not be executed again on a batch retry;
wait for all batch scheduling tasks to finish executing and check whether all batch scheduling tasks have executed successfully; if all batch scheduling tasks have executed successfully, reset the state of all batch scheduling tasks to the initial state and wait for the batch run of the next business date; otherwise terminate execution and wait for a batch retry.
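The scheduling-task-level tracking and retry behaviour described above can be outlined as follows; the state names and the repository interface are again assumptions used only for illustration.

```java
import java.util.List;

public class BatchScheduleTracker {

    public static final String INITIAL = "INITIAL";
    public static final String SUCCESS = "SUCCESS";

    public interface ScheduleTaskRepository {
        List<String> allTaskIds();
        String stateOf(String taskId);
        void updateState(String taskId, String state);
    }

    // On retry, scheduling tasks already marked SUCCESS are skipped.
    public static boolean shouldRun(ScheduleTaskRepository repo, String taskId) {
        return !SUCCESS.equals(repo.stateOf(taskId));
    }

    // After all scheduling tasks of the business date have run, either reset them
    // for the next business date (all succeeded) or leave the failed ones for a retry.
    public static boolean closeBusinessDate(ScheduleTaskRepository repo) {
        boolean allSucceeded = repo.allTaskIds().stream()
                .allMatch(id -> SUCCESS.equals(repo.stateOf(id)));
        if (allSucceeded) {
            repo.allTaskIds().forEach(id -> repo.updateState(id, INITIAL));
        }
        return allSucceeded;   // false: stop and wait for a batch retry
    }
}
```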
Compared with the prior art, the method for parallel batch running in the financial industry of the present invention has the following beneficial effects:
The method for parallel batch running in the financial industry of the present invention converts what was originally one large batch-processing task over one large data set into multiple small tasks, each running the batch in parallel over its corresponding small data set, so that the resources of the existing system can be fully utilized and the performance of batch processing is improved. During parallel execution, the subtask execution states and error information and the data execution states and error information are recorded, so that the task execution process is tracked and made fault-tolerant. The present invention is high-performance and fault-tolerant and makes full use of system resources; it can be widely applied in batch processing systems in the financial industry, and is highly practical and widely applicable.
Brief description of the drawings
Figure 1 is the architectural principle diagram of the present invention.
Figure 2 is the processing flowchart of the present invention.
Detailed description of the invention
The invention will be further described below with reference to the accompanying drawings and specific embodiments.
As shown in the drawings, the present invention provides a method for parallel batch running in the financial industry. Its implementation process is: a large task is split into multiple subtasks and the large data set is divided into multiple small data slices; an independently and parallel-executable subtask package is then built from each subtask and its corresponding data slice, and these subtask packages are placed into a unified thread pool for independent parallel execution. In this way, what was originally one large task running a batch over one large data set is converted into multiple small tasks, each running the batch in parallel over its corresponding small data set, so that the resources of the existing system can be fully utilized and the performance of batch processing is improved.
The technical solution proposed by the present invention is as follows:
As shown in Figure 1, a batch run system generally consists of multiple batch scheduling tasks. Because there are business dependencies between the batch scheduling tasks, they cannot be executed in parallel; the batch scheduling tasks are executed serially, one by one, according to their job flow. Inside a batch scheduling task, execution is parallel, following the fork-join idea: according to the configured task parallelism degree n, the large task that runs over N data records is decomposed into n small tasks that can be executed independently. For the first n-1 tasks, task k (1 ≤ k ≤ n-1) processes the data records from (k-1)*N/n+1 to k*N/n, and the last task n processes the data records from (n-1)*N/n+1 to N.
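With integer division, the slice boundaries above can be computed as in the following sketch; for example, N = 10 and n = 3 give the slices [1..3], [4..6] and [7..10], with the last task absorbing the remainder. The class and method names are illustrative.

```java
public class SliceRanges {

    // Inclusive [start, end] range of record indices (1-based) handled by task k of n over N records.
    public static long[] rangeOf(int k, int n, long N) {
        long start = (k - 1) * N / n + 1;
        long end = (k == n) ? N : k * N / n;   // the last task runs to the end of the data set
        return new long[] { start, end };
    }

    public static void main(String[] args) {
        long N = 10;
        int n = 3;
        for (int k = 1; k <= n; k++) {
            long[] r = rangeOf(k, n, N);
            System.out.println("task " + k + ": " + r[0] + " .. " + r[1]);
        }
        // prints: task 1: 1 .. 3, task 2: 4 .. 6, task 3: 7 .. 10
    }
}
```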
The overall processing flow is shown in the overall system processing flowchart of Figure 2. The specific steps are as follows:
lock the resources;
check whether the system can perform batch processing; if it can, query and obtain all of the batch scheduling tasks, otherwise return;
process each batch scheduling task in a loop;
slice the task according to the configured task parallelism parameter and store the subtasks obtained from slicing in the task distribution table;
shard the data according to the number of subtasks, store the data IDs after sharding in the data distribution table, and establish an association with the subtasks;
build a parallel-executable subtask package from each subtask and its corresponding data slice, and place the subtask packages into the unified thread pool for parallel execution;
process each business data record in each data slice associated with the parallel task package;
update the data execution state in the data distribution table according to whether the business data record is processed successfully: if the record is processed successfully, its state is set to success; if processing fails, its state is set to failure, the error information is recorded, and execution of this subtask is terminated; records whose data execution state is success will not be executed again on a batch retry;
repeat the above processing steps until all data records in the data slice corresponding to this subtask have been processed successfully, or processing has failed;
update the subtask execution state in the task distribution table according to the subtask's execution on its corresponding data slice: if the subtask executes successfully on all data of its corresponding data slice, its execution state is updated to success; otherwise it is updated to failure and the error information is recorded; subtasks whose execution state is success will not be executed again on a batch retry;
wait for all parallel subtask packages to finish executing;
perform an idempotency check to verify whether all parallel subtask packages have executed successfully; if they have not all executed successfully, terminate execution and wait for a batch retry;
record the execution log of this batch scheduling task;
update the execution state of the batch scheduling task according to its execution result: if the batch scheduling task executes successfully, its execution state is updated to success, otherwise it is updated to failure; batch scheduling tasks whose state is success will not be executed again on a batch retry;
wait for all batch scheduling tasks to finish executing and check whether all batch scheduling tasks have executed successfully; if all batch scheduling tasks have executed successfully, reset the state of all batch scheduling tasks to the initial state and wait for the batch run of the next business date; otherwise terminate execution and wait for a batch retry;
after all batch scheduling tasks have executed successfully, clear the task distribution and data distribution data;
unlock the resources.
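Putting the flow together, the driver of one batch run could be outlined as below; the BatchSystem interface, its method names and the unconditional unlock in the finally block are simplifying assumptions rather than details from the patent.

```java
import java.util.List;

public class BatchRunDriver {

    // Minimal view of the batch system used by the driver (an assumed interface).
    public interface BatchSystem {
        void lockResources();
        void unlockResources();
        boolean readyForBatchRun();                // whether the system may start a batch run
        List<String> loadScheduleTaskIds();        // all batch scheduling tasks, in job-flow order
        boolean runScheduleTask(String taskId);    // slicing, sharding, parallel execution, tracking
        void clearDistributionTables();            // remove task distribution and data distribution rows
    }

    public static void run(BatchSystem system) {
        system.lockResources();
        try {
            if (!system.readyForBatchRun()) {
                return;                            // system not ready: nothing to do
            }
            boolean allSucceeded = true;
            for (String taskId : system.loadScheduleTaskIds()) {
                if (!system.runScheduleTask(taskId)) {
                    allSucceeded = false;          // later tasks depend on this one:
                    break;                         // stop here and wait for a batch retry
                }
            }
            if (allSucceeded) {
                system.clearDistributionTables();  // only after every scheduling task succeeded
            }
        } finally {
            system.unlockResources();              // unlocking unconditionally is a simplification
        }
    }
}
```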
The present invention is implemented in the Java language.
The reference model of the task distribution table in the present invention is as follows:
The reference model of the data distribution table in the present invention is as follows:
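As a minimal sketch of the fields implied by the description above (all field names are assumptions, not the patent's actual models), the task distribution table and data distribution table could be modeled as:

```java
public class DistributionModels {

    // One row of the task distribution table: a subtask of a batch scheduling task.
    public record SubtaskRow(
            String scheduleTaskId,   // owning batch scheduling task
            int subtaskNo,           // 1..n, where n is the task parallelism degree
            String state,            // INITIAL / SUCCESS / FAILURE
            String errorMessage) {}

    // One row of the data distribution table: a data record assigned to a subtask.
    public record DataRow(
            long dataId,             // business data identifier
            String scheduleTaskId,
            int subtaskNo,           // association with the subtask that processes this record
            String state,            // INITIAL / SUCCESS / FAILURE
            String errorMessage) {}
}
```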
The control parameters in the present invention are as follows:
No. | Parameter name | Value type | Description
1 | BATCH_PROCESS_TASKS | Positive integer | Task parallelism degree
In this way, parallel batch processing in the financial industry can be achieved, solving the problems of low performance, poor fault tolerance and serious waste of resources in existing batch run systems.
Through the above specific embodiments, those skilled in the art can readily implement the present invention. It should be understood, however, that the present invention is not limited to the above specific embodiments. On the basis of the disclosed embodiments, those skilled in the art can combine different technical features in any manner to realize different technical solutions.
Apart from the technical features described in the specification, all other features are known to those skilled in the art.

Claims (7)

1. A method for parallel batch running in the financial industry, characterized in that its implementation process is: a task is split into several subtasks and the data set is divided into several data slices; an independently and parallel-executable subtask package is built from each subtask and its corresponding data slice, and these subtask packages are placed into a unified thread pool for independent parallel execution.
2. The method for parallel batch running in the financial industry according to claim 1, characterized in that before task slicing is performed, a resource pre-processing step is first required, which is specifically as follows:
lock the resources;
check whether the system can perform batch processing; if it can, query and obtain all of the batch scheduling tasks, otherwise return;
process each batch scheduling task in a loop, then enter the task slicing step;
correspondingly, after all of the batch scheduling tasks have executed successfully, unlock the resources.
3. The method for parallel batch running in the financial industry according to claim 1, characterized in that the task slicing process is: the task is sliced according to the configured task parallelism parameter, and the subtasks obtained from slicing are stored in a task distribution table.
4. The method for parallel batch running in the financial industry according to claim 1, characterized in that the data set sharding process is: the data is sharded according to the number of subtasks, the data IDs after sharding are stored in a data distribution table, and an association with the subtasks is established.
5. The method for parallel batch running in the financial industry according to claim 1, characterized in that the parallel execution process after the independent subtask packages have been built is:
process each business data record in each data slice associated with the parallel task package;
update the data execution state in the data distribution table according to whether the business data record is processed successfully: if the record is processed successfully, its state is set to success; if processing fails, its state is set to failure, the error information is recorded, and execution of this subtask is terminated; records whose data execution state is success will not be executed again on a batch retry;
repeat the above two steps until all data records in the data slice corresponding to this subtask have been processed successfully, or processing has failed;
update the subtask execution state in the task distribution table according to the subtask's execution on its corresponding data slice: if the subtask executes successfully on all data of its corresponding data slice, its execution state is updated to success; otherwise it is updated to failure and the error information is recorded; subtasks whose execution state is success will not be executed again on a batch retry;
wait for all parallel subtask packages to finish executing;
perform an idempotency check to verify whether all parallel subtask packages have executed successfully; if they have not all executed successfully, terminate execution and wait for a batch retry.
6. The method for parallel batch running in the financial industry according to claim 1, characterized in that the parallel batch run also includes task tracking and fault tolerance steps, i.e., during parallel execution the subtask execution states and error information and the data execution states and error information are recorded, so that the task execution process is tracked and made fault-tolerant.
7. The method for parallel batch running in the financial industry according to claim 6, characterized in that the detailed process of task tracking and fault tolerance is:
record the execution log of the batch scheduling task that is being executed;
update the execution state of the batch scheduling task according to its execution result: if the batch scheduling task executes successfully, its execution state is updated to success, otherwise it is updated to failure; batch scheduling tasks whose state is success will not be executed again on a batch retry;
wait for all batch scheduling tasks to finish executing and check whether all batch scheduling tasks have executed successfully; if all batch scheduling tasks have executed successfully, reset the state of all batch scheduling tasks to the initial state and wait for the batch run of the next business date; otherwise terminate execution and wait for a batch retry.
CN201610645308.0A 2016-08-08 2016-08-08 A method for parallel batch running in the financial industry Pending CN106293940A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610645308.0A CN106293940A (en) 2016-08-08 2016-08-08 A method for parallel batch running in the financial industry

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610645308.0A CN106293940A (en) 2016-08-08 2016-08-08 A method for parallel batch running in the financial industry

Publications (1)

Publication Number Publication Date
CN106293940A true CN106293940A (en) 2017-01-04

Family

ID=57667025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610645308.0A Pending CN106293940A (en) 2016-08-08 2016-08-08 A kind of method of parallel race batch in financial industry

Country Status (1)

Country Link
CN (1) CN106293940A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102402423A (en) * 2010-09-19 2012-04-04 百度在线网络技术(北京)有限公司 Method and equipment for performing multi-task processing in network equipment
CN103902376A (en) * 2012-12-24 2014-07-02 北大方正集团有限公司 Task processing method and device for printing
CN105183901A (en) * 2015-09-30 2015-12-23 北京京东尚科信息技术有限公司 Method and device for reading database table through data query engine
CN105700958A (en) * 2016-01-07 2016-06-22 北京京东尚科信息技术有限公司 Method and system for automatic splitting of task and parallel execution of sub-task

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108415768A (en) * 2017-02-09 2018-08-17 财付通支付科技有限公司 A kind of data batch processing method and system
CN108415768B (en) * 2017-02-09 2022-03-25 财付通支付科技有限公司 Data batch processing method and system
CN108230130A (en) * 2017-12-04 2018-06-29 阿里巴巴集团控股有限公司 Day cuts the method, apparatus and electronic equipment of data verification
CN108230130B (en) * 2017-12-04 2021-04-02 创新先进技术有限公司 Method and device for verifying daily cutting data and electronic equipment
CN109032796B (en) * 2018-07-18 2020-12-22 北京京东金融科技控股有限公司 Data processing method and device
CN109032796A (en) * 2018-07-18 2018-12-18 北京京东金融科技控股有限公司 A kind of data processing method and device
CN110147373A (en) * 2019-05-23 2019-08-20 泰康保险集团股份有限公司 Data processing method, device and electronic equipment
CN110147373B (en) * 2019-05-23 2021-06-22 泰康保险集团股份有限公司 Data processing method and device and electronic equipment
CN110688211A (en) * 2019-09-24 2020-01-14 四川新网银行股份有限公司 Distributed job scheduling method
CN111340147A (en) * 2020-05-22 2020-06-26 四川新网银行股份有限公司 Decision behavior generation method and system based on decision tree
CN112463828A (en) * 2020-11-02 2021-03-09 马上消费金融股份有限公司 Data processing method, device, equipment, system and readable storage medium
CN112307126A (en) * 2020-11-24 2021-02-02 上海浦东发展银行股份有限公司 Batch processing method and system for credit card account management data
CN112307126B (en) * 2020-11-24 2022-09-27 上海浦东发展银行股份有限公司 Batch processing method and system for credit card account management data
CN113485812A (en) * 2021-07-23 2021-10-08 重庆富民银行股份有限公司 Partition parallel processing method and system based on large data volume task
CN113485812B (en) * 2021-07-23 2023-12-12 重庆富民银行股份有限公司 Partition parallel processing method and system based on large-data-volume task

Similar Documents

Publication Publication Date Title
CN106293940A (en) A method for parallel batch running in the financial industry
CN105912387A (en) Method and device for dispatching data processing operation
US9525643B2 (en) Using templates to configure cloud resources
US8752059B2 (en) Computer data processing capacity planning using dependency relationships from a configuration management database
US20170220619A1 (en) Concurrency Control Method and Apparatus
DE102018003221A1 (en) Support of learned jump predictors
CN102929585B (en) A kind of batch processing method and system supporting the distributed data processing of many main frames
CN110362315B (en) DAG-based software system scheduling method and device
CN103699441B (en) The MapReduce report task executing method of task based access control granularity
CN112114973B (en) Data processing method and device
CN107316124B (en) Extensive affairs type job scheduling and processing general-purpose system under big data environment
CN113296905B (en) Scheduling method, scheduling device, electronic equipment, storage medium and software product
CN103678051B (en) A kind of online failure tolerant method in company-data processing system
US20160103708A1 (en) System and method for task execution in data processing
US20150172369A1 (en) Method and system for iterative pipeline
CN113886034A (en) Task scheduling method, system, electronic device and storage medium
CN106326005A (en) Automatic parameter tuning method for iterative MapReduce operation
CN106502842A (en) Data reconstruction method and system
US20200379998A1 (en) Systems and Methods for Determining Peak Memory Requirements in SQL Processing Engines with Concurrent Subtasks
Fernández et al. Reliably executing tasks in the presence of untrusted entities
CN113722357B (en) Data file verification method and device, electronic equipment and storage medium
CN108446165A (en) A kind of task forecasting method in cloud computing
CN103258009A (en) Method and system for acquiring and analyzing method performance data
CN107526648A (en) A kind of node device that handles is delayed the method and device of machine
CN106790536A (en) Composite Web services system of selection based on affairs and QoS

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170104

RJ01 Rejection of invention patent application after publication