CN112817724A - Task allocation method capable of dynamically arranging sequence - Google Patents

Task allocation method capable of dynamically arranging sequence

Info

Publication number
CN112817724A
Authority
CN
China
Prior art keywords
task
node
processed
data
tasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110162576.8A
Other languages
Chinese (zh)
Inventor
杨浩源
魏飞
陈开颜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Hufangde Information Technology Co ltd
Original Assignee
Suzhou Hufangde Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Hufangde Information Technology Co ltd
Priority to CN202110162576.8A
Publication of CN112817724A
Legal status: Pending

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06F: ELECTRIC DIGITAL DATA PROCESSING
          • G06F9/00: Arrangements for program control, e.g. control units
            • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
              • G06F9/46: Multiprogramming arrangements
                • G06F9/48: Program initiating; Program switching, e.g. by interrupt
                  • G06F9/4806: Task transfer initiation or dispatching
                    • G06F9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
                      • G06F9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
                • G06F9/54: Interprogram communication
                  • G06F9/546: Message passing systems or structures, e.g. queues
          • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
            • G06F16/20: Information retrieval of structured data, e.g. relational data
              • G06F16/24: Querying
                • G06F16/245: Query processing
                  • G06F16/2457: Query processing with adaptation to user needs
                    • G06F16/24578: Query processing with adaptation to user needs using ranking
                  • G06F16/2458: Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
                    • G06F16/2471: Distributed queries
          • G06F2209/00: Indexing scheme relating to G06F9/00
            • G06F2209/54: Indexing scheme relating to G06F9/54
              • G06F2209/548: Queue
        • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
          • G06Q10/00: Administration; Management
            • G06Q10/08: Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
              • G06Q10/083: Shipping
          • G06Q30/00: Commerce
            • G06Q30/06: Buying, selling or leasing transactions
              • G06Q30/0601: Electronic shopping [e-shopping]
                • G06Q30/0633: Lists, e.g. purchase orders, compilation or processing
                  • G06Q30/0635: Processing of requisition or of purchase orders

Abstract

The invention discloses a task allocation method in which the task sequence can be dynamically orchestrated. The method comprises the following steps: a task code is assigned to each node task and stored in a task arrangement table; the table orders the task codes according to the circulation sequence, assigns an ID number to each task code, and uses the task code of the previous node task in the circulation sequence as the parent task code of the next node task. When a node task is to be processed, its parent task code is first queried from the task arrangement table and the node task corresponding to that parent task code is processed; only after the node task corresponding to the parent task code has been processed is the node task corresponding to the task code itself processed. The invention allows node tasks to be orchestrated dynamically and the task allocation order to be adjusted quickly and conveniently, greatly improving the efficiency of task adjustment and allocation and thereby the processing efficiency of the whole service chain.

Description

Task allocation method capable of dynamically arranging sequence
Technical Field
The invention relates to the technical field of data processing, and in particular to a task allocation method capable of dynamically arranging a sequence.
Background
With the development of information technology, the processing of long service chains is increasingly common. For example, after a customer places an order, the order passes through inventory checking, determination of the delivery warehouse from the receiving address, combination of orders going to the same address, printing of sorting slips, packing, weighing, and shipment (express delivery). Task nodes in such a service chain are nested inside other task nodes, and every state change requires a program to mark the order's state and a downstream program to act once the marking is complete. When processing the tasks of such a long service chain, an order table is usually prepared that stores a flag bit for each program, and each program decides whether a record is its to process by checking the flag bits of the upstream nodes; for example, the order-combination ("closing") program queries for records whose delivery warehouse has been determined and which have not yet been combined, and processes those. However, this conventional approach makes the task allocation order inconvenient to adjust, because the processing order can only be changed by modifying the processing programs themselves. For example, if task A is handled by processing program A' and task B by processing program B', changing the processing order of the service from A-B to B-A requires re-ordering programs A' and B', which is cumbersome and reduces the processing efficiency of the whole service chain.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a task allocation method capable of dynamically arranging a sequence, which allows the task allocation order to be adjusted quickly and thereby helps improve the processing efficiency of the whole service chain.
In order to solve the technical problems, the technical scheme provided by the invention is as follows:
a task allocation method capable of dynamically arranging sequence comprises the following steps:
distributing task codes to each node task, storing the task codes in a task arrangement table, sequencing the task codes of the node tasks according to a circulation sequence by the task arrangement table, distributing an ID number to each task code, and taking the task code of the previous node task in the circulation sequence as a parent-level task code of the next node task;
when the node tasks are processed, the parent task codes where the node tasks are located are inquired from the task arrangement table, the node tasks corresponding to the parent task codes are processed, and after the node tasks corresponding to the parent task codes are processed, the node tasks corresponding to the task codes are processed.
In one embodiment, the ID of each node task and the data the node task needs to process are stored in a database table; the data needed by the different node tasks in the database table is filled into different queues to be processed at regular intervals, and when a node task is processed, the data it needs is taken out of the corresponding queue to be processed.
In one embodiment, the method for filling the data needed by the different node tasks in the database table into the different queues to be processed at regular intervals is as follows: all node tasks that have a parent task code in the task arrangement table are traversed at regular intervals; for each, it is judged whether the amount of data in the corresponding queue to be processed is below a storage threshold; if so, data needed by that node task is extracted from the database table and filled into the queue until the storage threshold of the queue is reached; if not, no filling is performed.
In one embodiment, each time a piece of data belonging to a node task is taken from a queue to be processed, a record of the removal must also be stored in the corresponding taken-task queue; the removal record includes the removal time.
In one embodiment, after each piece of data of a node task has been processed, a data saving interface is called to save the processing result into the database table.
In one embodiment, if the data saving interface can be successfully called after a piece of data of a node task has been processed, the removal record of that piece of data in the taken-task queue is deleted; if the data saving interface cannot be called, the piece of data is re-enqueued in the queue to be processed of the node task for a retry and the number of retries is recorded in the database table; once the number of retries exceeds the set retry count, the data is no longer processed.
In one embodiment, if the data saving interface cannot be called, it is further judged whether the removal time of the piece of data recorded in the taken-task queue exceeds the waiting set time; if so, the piece of data is re-enqueued in the queue to be processed of the node task for a retry, the number of retries is recorded in the database table, and the removal record of the piece of data in the taken-task queue is deleted; if not, the data is temporarily left unprocessed.
In one embodiment, if the number of retries exceeds the set retry count, an alarm is also raised.
The invention has the following beneficial effects: the task allocation method capable of dynamically arranging the sequence can orchestrate node tasks dynamically and adjust the task allocation order conveniently and quickly, which greatly improves the efficiency of task adjustment and allocation and thereby the processing efficiency of the whole service chain.
Drawings
FIG. 1 is a block flow diagram of a dynamically orchestratable sequential task allocation method according to the present invention;
FIG. 2 is a flow chart of an e-commerce shipping service;
FIG. 3 is a flow chart of the PDF file processing service before adjustment;
FIG. 4 is a flow chart of the PDF file processing service after adjustment.
Detailed Description
The present invention is further described below in conjunction with the following figures and specific examples so that those skilled in the art may better understand the present invention and practice it, but the examples are not intended to limit the present invention.
As shown in fig. 1, the present embodiment discloses a task allocation method capable of dynamically arranging a sequence, which includes the following steps:
a task code is assigned to each node task and the task codes are stored in a task arrangement table; the task arrangement table orders the task codes of the node tasks according to the circulation sequence of the service, assigns an ID number to each task code, and uses the task code of the previous node task in the circulation sequence as the parent task code of the next node task; that is, of two node tasks adjacent in the circulation sequence, the task code of the previous node task is the parent task code of the next node task;
when a node task is to be processed, the parent task code of that node task is first queried from the task arrangement table and the node task corresponding to the parent task code is processed; after the node task corresponding to the parent task code has been processed, the node task corresponding to the task code is processed.
The task codes and parent task codes in the task arrangement table form a tree-shaped task structure. When the order of node tasks needs to change, or a new node task needs to be added, only the task codes and parent task codes in the task arrangement table need to be adjusted; the task processing programs themselves require no modification. This achieves dynamic orchestration, greatly increases the speed and efficiency of task adjustment, and improves the processing efficiency of the whole service chain.
For example, referring to fig. 2, the task arrangement table shown in Table 1 is created according to the circulation order of the e-commerce shipping service (create order → check inventory → determine delivery warehouse → combine orders → print sorting slip → pack → print courier label → ship):
Table 1 Task arrangement table (task array) for the e-commerce shipping service
[Table 1 appears only as an image in the original publication; per the surrounding text it records each node task's ID, task code, parent task code, and parameters such as the waiting set time and set retry count.]
As can be seen from Table 1, "create order" is the root node task; every other node task is assigned a parent task code in addition to its own task code. When a node task is processed, its parent task code is first queried from the task arrangement table and the node task corresponding to that parent task code is processed; once that node task has been processed, the node task corresponding to the task code itself is processed. For example, to process the "check inventory" task, the node task "create order" corresponding to its parent task code (create order) must be processed first; after that node has been processed, the "check inventory" node task is processed.
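For illustration only, the following Python sketch models a task arrangement table such as Table 1 as a list of records and shows the parent-first processing rule; the field names (task_code, parent_task_code) and the completion check are assumptions made for the example, not structures prescribed by the patent:

```python
# Hypothetical in-memory model of a task arrangement table (field names assumed).
task_table = [
    {"id": 1, "task_code": "create_order",        "parent_task_code": None},
    {"id": 2, "task_code": "check_inventory",     "parent_task_code": "create_order"},
    {"id": 3, "task_code": "determine_warehouse", "parent_task_code": "check_inventory"},
]

completed = set()  # task codes of node tasks that have finished processing

def can_process(task_code: str) -> bool:
    """A node task may run only once the node task named by its parent task code is done."""
    row = next(r for r in task_table if r["task_code"] == task_code)
    parent = row["parent_task_code"]
    return parent is None or parent in completed

def process(task_code: str) -> None:
    if not can_process(task_code):
        return  # parent node task not processed yet; try again later
    # ... the node task's own processing program would run here ...
    completed.add(task_code)

process("create_order")
process("check_inventory")  # allowed only because "create_order" has completed
```

Changing the processing order then amounts to editing parent_task_code values in task_table; the process() logic is untouched.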
In one embodiment, the ID of each node task and the data it needs to process are stored in a database table, and the data needed by the different node tasks is filled into different queues to be processed at regular intervals; that is, each node task has its own queue to be processed that caches its task data. When a node task is processed, its data is taken from the corresponding queue to be processed and then handled by the task processing program. The queues to be processed make it easy to classify, cache, and fetch task data: different instances of the same program always receive different data, so multiple machines can be deployed for processing; because the queue controls the data, the data taken by different instances is isolated, and the same piece of data is never processed twice even when several instances are running. Meanwhile, because the queues are refilled at regular intervals, data taken away for processing is replenished promptly, so each queue stays well utilized and the whole service chain runs in an orderly and efficient way.
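As a rough sketch of the one-queue-per-node-task idea (the in-memory deque stands in for whatever queue service an implementation actually uses, which the patent does not name), each node task ID maps to its own queue to be processed, and a worker instance takes its next piece of data from that queue, so two instances can never receive the same record:

```python
from collections import deque

# One queue to be processed per node task, keyed by node-task ID (layout assumed).
pending_queues = {2: deque([{"id": 101}, {"id": 102}]), 3: deque()}

def take_next(task_id: int):
    """Hand the next piece of data of a node task to the calling worker instance.
    Because the record leaves the queue, no other instance can take the same data."""
    queue = pending_queues[task_id]
    return queue.popleft() if queue else None
```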
In one embodiment, the method for filling the data needed by the different node tasks in the database table into the different queues to be processed at regular intervals is as follows: all node tasks that have a parent task code in the task arrangement table are traversed at regular intervals; for each, it is judged whether the amount of data in its queue to be processed is below the queue's storage threshold. If so, data needed by that node task is extracted from the database table and filled into the queue until the storage threshold is reached; if not, the queue still has data awaiting processing and no filling is needed. In other words, every node task in the task arrangement table except the root node task must be traversed at regular intervals, so that the corresponding queue to be processed is refilled on schedule.
It will be appreciated that a queue to be processed has a limited storage capacity; when a node task has more data to process than the queue can hold, the data cannot all be stored at once and must be filled in gradually. For example, suppose the "check inventory" node task has 300 pieces of data to process while the storage threshold of its queue is only 200: the queue holds at most 200 pieces at a time, and only after part of that data has been taken away for processing is the remaining data of the "check inventory" node task filled into the queue batch by batch.
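A minimal sketch of the timed filling step, assuming the queue layout above; the fetch_unprocessed helper and the threshold value of 200 are placeholders, since the real query and limits live in the database table and the (image-only) task arrangement table:

```python
STORAGE_THRESHOLD = 200  # example storage threshold for a queue to be processed

def fetch_unprocessed(task_id: int, limit: int) -> list:
    """Placeholder for pulling up to `limit` unprocessed records of one node task
    from the database table; a real implementation would run a query here."""
    return []

def refill_queues(task_table, pending_queues) -> None:
    # Traverse every node task that has a parent task code, i.e. skip the root node task.
    for row in task_table:
        if row["parent_task_code"] is None:
            continue
        queue = pending_queues[row["id"]]
        shortfall = STORAGE_THRESHOLD - len(queue)
        if shortfall <= 0:
            continue  # queue already holds data up to its storage threshold; no filling
        for record in fetch_unprocessed(row["id"], shortfall):
            queue.append(record)
```

Run periodically from a scheduler, this keeps each queue topped up as data is taken away, matching the batch-by-batch behaviour of the 300-record example above.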
In one embodiment, each time a piece of data belonging to a node task is taken from a queue to be processed, a record of the removal must be stored in the corresponding taken-task queue; the removal record includes the removal time and the ID number of the data. It will be appreciated that the taken-task queues correspond one-to-one with the queues to be processed, and every time a piece of data is dequeued from a queue to be processed, a record is enqueued in the corresponding taken-task queue.
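The bookkeeping on dequeue could look like the following sketch; the taken_records mapping plays the role of the taken-task queue, and its exact shape is an assumption made for illustration:

```python
import time
from collections import deque

pending_queues = {2: deque([{"id": 101}, {"id": 102}])}  # node-task ID -> queue to be processed
taken_records  = {2: {}}                                  # node-task ID -> {data ID: removal time}

def take_and_record(task_id: int):
    queue = pending_queues[task_id]
    if not queue:
        return None
    record = queue.popleft()
    # Removal condition stored in the taken-task queue: removal time, keyed by the data's ID number.
    taken_records[task_id][record["id"]] = time.time()
    return record
```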
In one embodiment, after each piece of data of a node task has been processed, a data saving interface is called to save the processing result into the database table. The processing result includes success or failure; in addition, the pending state of the node task is also stored in the database table.
Further, if the data saving interface can be successfully called after a piece of data of a node task has been processed, the removal record of that piece of data in the taken-task queue is deleted. If the processing program of the node task has crashed or is otherwise abnormal and cannot process tasks normally, the data saving interface cannot be called; in that case, the piece of data is re-enqueued in the node task's queue to be processed for a retry and the number of retries is recorded in the database table. Once the number of retries exceeds the set retry count, the data is no longer processed and is no longer filled into the queue to be processed.
Further, if the data saving interface cannot be called, it is judged whether the removal time recorded for the piece of data in the taken-task queue exceeds the waiting set time. If so, the piece of data is re-enqueued in the node task's queue to be processed for a retry, the number of retries is recorded in the database table, and the removal record in the taken-task queue is deleted; if not, the data is temporarily left unprocessed. For example, as shown in Table 1, the waiting set time is 60 s. If the processing program of the "check inventory" node task stops abnormally, then 60 s after a piece of data has been taken away, the retry count retry_num of the record with ID 2 becomes 1 and the data is refilled into the queue to be processed and reprocessed; once the retry count exceeds the set retry count (3), retries stop.
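The failure handling described in the last three paragraphs might be sketched as follows; the save_result callable stands in for the data saving interface, and the 60 s waiting set time, the retry limit of 3, and the retry_counts dictionary (standing in for the retry count column of the database table) are example values, not prescribed by the patent:

```python
import time

WAIT_SECONDS = 60  # waiting set time (example value quoted for Table 1)
MAX_RETRIES = 3    # set retry count (example value quoted for Table 1)

def handle_after_processing(task_id, record, pending_queues, taken_records,
                            retry_counts, save_result):
    """save_result(task_id, record) is the data saving interface; True means the call succeeded."""
    data_id = record["id"]
    if save_result(task_id, record):
        # Saving succeeded: delete the removal record from the taken-task queue.
        taken_records[task_id].pop(data_id, None)
        return
    # The data saving interface could not be called: retry only after the waiting set time.
    taken_at = taken_records[task_id].get(data_id, 0)
    if time.time() - taken_at < WAIT_SECONDS:
        return  # leave the data alone for now
    retry_counts[data_id] = retry_counts.get(data_id, 0) + 1  # recorded in the database table
    taken_records[task_id].pop(data_id, None)                 # delete the removal record
    if retry_counts[data_id] > MAX_RETRIES:
        print(f"ALARM: data {data_id} of node task {task_id} exceeded {MAX_RETRIES} retries")
        return  # the data is no longer processed or refilled into the queue
    pending_queues[task_id].append(record)  # re-enqueue the data for a retry
```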
With the database table (task record) arranged in this way, the processing flag bits of the different programs no longer need to be stored as individual fields of the database, and more tasks and flag bits can be added easily. This avoids the prior-art problem that recording each program's processing flag bit in a MySQL table makes it hard to extend the table's fields as the number of processing programs grows.
In one embodiment, if the number of retries exceeds the set retry count, an alarm prompt is also issued so that the problem can be examined manually in time.
In one embodiment, the processing programs of the node tasks communicate through an HTTP interface, which solves the problem of data exchange between different programming languages and reduces the complexity of the system.
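As a hedged illustration of that HTTP-based communication (the URL path and JSON payload are entirely hypothetical; the patent only states that the node-task programs communicate over an HTTP interface), a handler written in any language could report its result by POSTing to a save-data endpoint exposed by the coordinating service:

```python
import requests  # third-party HTTP client

def call_save_data_interface(base_url: str, task_id: int, data_id: int, success: bool) -> bool:
    """POST a node task's processing result to a (hypothetical) save-data HTTP endpoint.
    Returns True only if the interface was reachable and accepted the result."""
    try:
        resp = requests.post(
            f"{base_url}/save_result",
            json={"task_id": task_id, "data_id": data_id, "success": success},
            timeout=5,
        )
        return resp.status_code == 200
    except requests.RequestException:
        return False
```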
As shown in fig. 3, the adjustment of the processing order under the task allocation method described above is illustrated below using the PDF file processing service as an example.
First, the task arrangement table before adjustment, shown in Table 2, is established according to the flow sequence of the flowchart in fig. 3:
Table 2 Task arrangement table (task array) of the PDF file processing service before adjustment
[Table 2 appears only as an image in the original publication; it lists the ID, task code, and parent task code of each node task in the PDF processing flow of fig. 3.]
If a "remove duplicate PDF files" node task is to be added to the flow shown in fig. 3, that is, if the flow of fig. 3 is to be adjusted into the flow of fig. 4, only the parent task codes in Table 2 need to be adjusted; the adjusted task arrangement table is shown in Table 3:
Table 3 Task arrangement table (task array) of the PDF file processing service after adjustment
[Table 3 appears only as an image in the original publication; it shows the same columns after the parent task codes have been adjusted to insert the PDF de-duplication node task.]
As can be seen from Tables 2 and 3, adjusting the node tasks of the PDF file processing service only requires adjusting the task arrangement table; the processing programs of the individual PDF node services do not need to be changed.
In addition, as can be seen from Table 3, there are two node tasks whose parent task code is "download PDF": their task codes are "PDF to WORD" and "extract PDF text". When these node tasks are processed, the "download PDF file" node task corresponding to the parent task code "download PDF" must be processed first; only then are "PDF to WORD" and "extract PDF text" processed.
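A sketch of the re-arrangement mechanism itself; the rows below are invented stand-ins (Tables 2 and 3 are only available as images), so the example does not reproduce the patent's exact tables, it only shows that inserting a node task is a matter of adding one row and re-pointing parent task codes:

```python
def insert_node_task(task_table, new_code, parent_code, next_id):
    """Insert a new node task directly after `parent_code` by rewriting parent task codes only."""
    # Re-point every node task that used to follow `parent_code` at the new node task.
    for row in task_table:
        if row["parent_task_code"] == parent_code:
            row["parent_task_code"] = new_code
    # Add the new node task itself as the child of `parent_code`.
    task_table.append({"id": next_id, "task_code": new_code, "parent_task_code": parent_code})

# Invented example: add a "dedup_pdf" node task after "download_pdf".
pdf_table = [
    {"id": 1, "task_code": "download_pdf",     "parent_task_code": None},
    {"id": 2, "task_code": "pdf_to_word",      "parent_task_code": "download_pdf"},
    {"id": 3, "task_code": "extract_pdf_text", "parent_task_code": "download_pdf"},
]
insert_node_task(pdf_table, "dedup_pdf", "download_pdf", next_id=4)
# Afterwards "pdf_to_word" and "extract_pdf_text" both point at "dedup_pdf";
# none of the PDF processing programs had to change.
```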
The task allocation method capable of dynamically arranging the sequence described above can orchestrate node tasks dynamically and adjust the task allocation order conveniently and quickly, greatly improving the efficiency of task adjustment and allocation and thereby the processing efficiency of the whole service chain.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above examples are merely illustrative and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments exhaustively, and obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (8)

1. A task allocation method capable of dynamically arranging a sequence, characterized by comprising the following steps:
assigning a task code to each node task and storing the task codes in a task arrangement table, wherein the task arrangement table orders the task codes of the node tasks according to the circulation sequence, assigns an ID number to each task code, and uses the task code of the previous node task in the circulation sequence as the parent task code of the next node task;
when a node task is to be processed, querying the task arrangement table for the parent task code of the node task and processing the node task corresponding to the parent task code; and after the node task corresponding to the parent task code has been processed, processing the node task corresponding to the task code.
2. The task allocation method capable of dynamically arranging a sequence according to claim 1, wherein the ID of each node task and the data the node task needs to process are stored in a database table, the data needed by the different node tasks in the database table is filled into different queues to be processed at regular intervals, and when a node task is processed, the data it needs is taken out of the corresponding queue to be processed for processing.
3. The task allocation method capable of dynamically arranging a sequence according to claim 2, wherein the method for filling the data needed by the different node tasks in the database table into the different queues to be processed at regular intervals comprises: traversing, at regular intervals, all node tasks that have a parent task code in the task arrangement table; judging whether the amount of data in the queue to be processed corresponding to each such node task is below a storage threshold; if so, extracting the data needed by the corresponding node task from the database table and filling it into the queue to be processed until the storage threshold of the queue is reached; and if not, performing no filling.
4. The task allocation method capable of dynamically arranging a sequence according to claim 2, wherein each time a piece of data belonging to a node task is taken from the queue to be processed, a record of the removal is also stored in the corresponding taken-task queue, the removal record comprising the removal time.
5. The task allocation method capable of dynamically arranging a sequence according to claim 4, wherein after each piece of data of each node task has been processed, a data saving interface is called to save the processing result into the database table.
6. The task allocation method capable of dynamically arranging a sequence according to claim 5, wherein, if the data saving interface can be successfully called after a piece of data of a node task has been processed, the removal record of that piece of data in the taken-task queue is deleted; if the data saving interface cannot be called, the piece of data is re-enqueued in the queue to be processed corresponding to the node task for a retry and the number of retries is recorded in the database table; and if the number of retries exceeds the set retry count, the data is no longer processed.
7. The task allocation method capable of dynamically arranging a sequence according to claim 6, wherein, if the data saving interface cannot be called, it is further judged whether the removal time of the piece of data recorded in the taken-task queue exceeds the waiting set time; if so, the piece of data is re-enqueued in the queue to be processed corresponding to the node task for a retry, the number of retries is recorded in the database table, and the removal record of the piece of data in the taken-task queue is deleted; and if not, the data is temporarily left unprocessed.
8. The task allocation method capable of dynamically arranging a sequence according to claim 6, wherein, if the number of retries exceeds the set retry count, an alarm prompt is further issued.
CN202110162576.8A 2021-02-05 2021-02-05 Task allocation method capable of dynamically arranging sequence Pending CN112817724A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110162576.8A CN112817724A (en) 2021-02-05 2021-02-05 Task allocation method capable of dynamically arranging sequence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110162576.8A CN112817724A (en) 2021-02-05 2021-02-05 Task allocation method capable of dynamically arranging sequence

Publications (1)

Publication Number Publication Date
CN112817724A true CN112817724A (en) 2021-05-18

Family

ID=75861873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110162576.8A Pending CN112817724A (en) 2021-02-05 2021-02-05 Task allocation method capable of dynamically arranging sequence

Country Status (1)

Country Link
CN (1) CN112817724A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102467532A (en) * 2010-11-12 2012-05-23 中国移动通信集团山东有限公司 Task processing method and task processing device
US20140237452A1 (en) * 2013-02-15 2014-08-21 Microsoft Corporation Call Stacks for Asynchronous Programs
WO2018121738A1 (en) * 2016-12-30 2018-07-05 北京奇虎科技有限公司 Method and apparatus for processing streaming data task
US20180276031A1 (en) * 2015-09-15 2018-09-27 Alibaba Group Holding Limited Task allocation method and system
US20200110676A1 (en) * 2018-10-08 2020-04-09 Hewlett Packard Enterprise Development Lp Programming model and framework for providing resilient parallel tasks
CN112035235A (en) * 2020-09-02 2020-12-04 中国平安人寿保险股份有限公司 Task scheduling method, system, device and storage medium
CN112114971A (en) * 2020-09-28 2020-12-22 中国建设银行股份有限公司 Task allocation method, device and equipment
CN112181621A (en) * 2020-09-27 2021-01-05 中国建设银行股份有限公司 Task scheduling system, method, equipment and storage medium



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination