CN109214132B - LVC simulation-oriented uncoupled streaming type large-flux asynchronous task processing system - Google Patents


Info

Publication number
CN109214132B
CN109214132B (application CN201811280764.5A)
Authority
CN
China
Prior art keywords
task
processing
node
queue node
task queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811280764.5A
Other languages
Chinese (zh)
Other versions
CN109214132A (en
Inventor
贾长伟
王晓路
刘闻
张恒
刘佳
何漫
汪宏昇
董志明
谭亚新
范锐
王颖昕
褚厚斌
蔡斐华
郭晶
张丽晔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Academy of Launch Vehicle Technology CALT
Original Assignee
China Academy of Launch Vehicle Technology CALT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Academy of Launch Vehicle Technology CALT filed Critical China Academy of Launch Vehicle Technology CALT
Priority to CN201811280764.5A priority Critical patent/CN109214132B/en
Publication of CN109214132A publication Critical patent/CN109214132A/en
Application granted granted Critical
Publication of CN109214132B publication Critical patent/CN109214132B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation

Abstract

An LVC simulation-oriented uncoupled streaming type large-flux asynchronous task processing system comprises an external interface, a task queue node, a task processing node and a data center. The external interface receives external parent task and resource data, feeds the external parent task and resource data back to the task queue node, receives the task processing result fed back by the task queue node and issues the task processing result outwards. The task queue node is responsible for splitting and merging tasks, distributing the tasks to the task processing nodes and storing resource data to the data center; processing results from the task processing nodes and feeding back the processed results to the external interface. The task processing node respectively acquires task and resource data from the task queue node and the data center, processes the task, and feeds back a processing result and a new task generated in the processing process to the task queue node. The data center stores resource data. The invention reduces the delay of the simulation system, improves the communication efficiency and realizes LVC integrated high-efficiency simulation.

Description

LVC simulation-oriented uncoupled streaming type large-flux asynchronous task processing system
Technical Field
The invention relates to an uncoupled streaming type large-flux asynchronous task processing system oriented to LVC simulation, which adopts distributed, streaming processing mechanisms to loosely couple task processing and data while fully considering load balancing among nodes, and belongs to the field of simulation.
Background
LVC integrated simulation refers to simulation that combines three kinds of participants in one simulation system: live, simulator (virtual) and virtual forces (constructive). Live means real people using real equipment in real operation. The simulator is a simulation system operated by real people, typically appearing as a human-in-the-loop simulated training system. Virtual forces are purely digital simulation systems used as deduction and analysis tools.
At present, much research on LVC integrated simulation technology has been carried out at home and abroad. In recent years, foreign military-product test and verification has developed from a mode based mainly on physical tests toward comprehensive test and verification combining the virtual and the real; a series of theoretical and practical achievements have been obtained in the systemization, intellectualization, networking and standardization of comprehensive military-product testing, and a series of standard specifications have been formed, including the High Level Architecture (HLA), the Base Object Model (BOM), the Test and Training Enabling Architecture (TENA) and the Model Driven Architecture (MDA). LVC simulation involves a large number of live, simulator and mathematical models; each simulation step entails frequent interaction, heavy computation and large data volumes, which increases simulation-system delay and reduces communication efficiency. The invention therefore provides an uncoupled streaming type large-flux asynchronous task processing method for LVC simulation.
Disclosure of Invention
The technical problem solved by the invention is to provide an uncoupled streaming type large-flux asynchronous task processing system for LVC simulation that reduces the delay of the simulation system, improves communication efficiency and realizes efficient LVC-integrated simulation.
The technical scheme of the invention is as follows:
the uncoupled streaming type large-flux asynchronous task processing system for LVC simulation comprises an external interface, a task queue node, a task processing node and a data center;
an external interface: receiving external parent task and resource data, feeding the external parent task and resource data back to the task queue node according to the request of the task queue node, receiving the task processing result fed back by the task queue node and publishing the task processing result outwards;
task queue node: storing task and resource data, which are responsible for splitting and merging tasks, distributing the tasks to task processing nodes, and storing the resource data to a data center; processing the processing result from the task processing node and feeding back to the external interface; the tasks comprise external parent tasks and new tasks generated by task processing nodes;
task processing nodes: respectively acquiring task and resource data from a task queue node and a data center, processing the task, and feeding back a processing result and a new task generated in the processing process to the task queue node;
and (3) a data center: the resource data is stored.
The task queue nodes comprise a plurality of task queue nodes, and each task queue node judges whether to send a request to the external interface according to the current load condition.
The task queue node numbers each acquired task and its resource data to obtain a task number and a resource index number, and then stores the task in a task queue list with the fields task number, resource index number, whether subtasks exist, whether the task is a subtask, and a subtask registry; task numbers and resource index numbers correspond one to one.
The subtask registry is a separate list.
When a task queue node receives a task, it judges whether the task needs to be split according to the task's complexity and resource consumption.
When the task does not need to be split, the subtask registry is an empty list.
When the task needs to be split, the task number, resource index number and completion flag of every subtask contained in the task are stored in the subtask registry.
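The queue-list record and split decision described above can be sketched as follows; this is a minimal illustration in Python (the patent specifies no implementation language), and the field and function names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class QueueEntry:
    """One row of the task queue list (field names are assumed)."""
    task_number: str        # "M001" for a parent task, "Z001" for a subtask
    resource_index: str     # locates this task's resource data in the data center
    has_subtasks: bool      # was this task split into subtasks?
    is_subtask: bool        # is this task itself a subtask?
    subtask_registry: list = field(default_factory=list)  # empty when not split

def register_task(entry, subtasks=None):
    """Record the split decision: an empty registry when the task is not
    split, otherwise one (number, index, completed) record per subtask."""
    if not subtasks:
        entry.has_subtasks = False
        entry.subtask_registry = []
    else:
        entry.has_subtasks = True
        entry.subtask_registry = [
            {"task_number": t, "resource_index": r, "completed": False}
            for (t, r) in subtasks
        ]
    return entry
```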
The task queue node processes the processing result of the task processing node according to the following mode:
when the original task is split, after all the subtasks are processed, the task queue node is responsible for merging the processing results of all the subtasks to obtain the processing result of the original task;
when the original task is not split, the processing result of the corresponding task processing node is the processing result of the original task.
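The two merging cases can be expressed as a small function; a minimal sketch, assuming queue entries are dictionaries shaped like the task queue list above and that merged subtask results are simply collected into a list (the patent does not fix a merge rule):

```python
def merge_results(entry, results):
    """Return the original task's result: merged subtask results when the
    task was split, otherwise the single processing node's result.
    `results` maps task numbers to the results fed back by processing nodes."""
    if entry["has_subtasks"]:
        subs = entry["subtask_registry"]
        if not all(s["completed"] for s in subs):
            return None  # still waiting for some subtasks to finish
        return {"task": entry["task_number"],
                "result": [results[s["task_number"]] for s in subs]}
    # not split: the processing node's result IS the original task's result
    return {"task": entry["task_number"],
            "result": results[entry["task_number"]]}
```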
When receiving a task, the task queue node stores the task number, the resource index number and resource data required for completing the task into a data center.
When the task queue node distributes the task to the task processing node, the task number and the resource index number of the task are simultaneously sent to the task processing node.
The task processing nodes are multiple, each task processing node submits an application to a task queue node, acquires a task from the task queue node, acquires resource data needed by processing the task from a data center according to a resource index number, and performs calculation processing of the task;
when the task computation is completed and no new task was generated during processing, the processing result is fed back to a task queue node;
when the task calculation is completed and a new task is generated in the processing process, feeding back a processing result, the new task generated in the processing process and resource data required by the new task to a task queue node;
when the task calculation is not completed, no feedback is provided.
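The three feedback cases above can be summarized in one hypothetical helper; the message shape is an assumption:

```python
def feedback(task_done, new_tasks, result, new_task_resources=None):
    """Build the message a processing node sends back to the queue node.
    Case 3: computation unfinished -> no feedback at all.
    Case 1: finished, no new tasks -> result only.
    Case 2: finished, new tasks generated -> result + new tasks + their
    resource data."""
    if not task_done:
        return None
    msg = {"result": result}
    if new_tasks:
        msg["new_tasks"] = new_tasks
        msg["resources"] = new_task_resources or {}
    return msg
```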
After the task processing node acquires the task from the task queue node, the task processing node needs to maintain a task processing list, and the task processing list comprises: task number, resource index number, whether to generate new task, wherein the task number and the resource index number are directly obtained from the task queue node.
The task processing node, when submitting an application to the task queue node, must also provide its own software and hardware information to the task queue node; the software and hardware information comprises the graphics card, CPU utilization rate, memory utilization rate and throughput.
The task queue node distributes tasks to the task processing nodes in the following manner: the task queue node periodically sorts the task processing nodes applying for the calculation tasks according to the priority of the software and hardware information, and distributes tasks to the task processing nodes with high priority, so that the load balance of the task processing nodes is realized.
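The priority-sorted distribution might look like the sketch below; the scoring formula is an assumption, since the patent only names the indicators (graphics card, CPU utilization, memory utilization, throughput) without giving weights:

```python
def rank_applicants(nodes):
    """Sort applying nodes so lightly loaded ones come first.
    Assumed rule: lower CPU/memory utilization and higher throughput
    mean higher priority (lower score)."""
    def score(n):
        return (n["cpu_util"] + n["mem_util"]) - n["throughput"]
    return sorted(nodes, key=score)

def dispatch(tasks, nodes):
    """Assign pending tasks to the highest-priority applicants,
    one task per applying node in this round."""
    assignment = {}
    for task, node in zip(tasks, rank_applicants(nodes)):
        assignment[task] = node["name"]
    return assignment
```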
Compared with the prior art, the invention has the following advantages:
(1) The task queue node splits the tasks and distributes the tasks to a plurality of task processing nodes, so that distributed processing is realized, the calculation speed is greatly improved, the delay of a simulation system is reduced, the communication efficiency is improved, and the large-flux calculation tasks can be processed.
(2) The task queue node can distribute tasks to task processing nodes on request without waiting for the results fed back by other task processing nodes, realizing streaming processing of tasks, improving communication efficiency and achieving efficient LVC-integrated simulation.
(3) When the task queue node receives the task, the task and the resource data are correspondingly numbered, and the resource data are independently stored in the data center, so that loose coupling of the task and the resource data is realized, resource consumption of the node is reduced, and the calculation speed and the safety of the system are improved.
Drawings
FIG. 1 is a block diagram of the components of the present invention;
FIG. 2 is a task queue list structure diagram of the present invention;
fig. 3 is a diagram of a task processing list structure of the present invention.
Detailed Description
Before describing the embodiments of the present invention, the technical terms used are explained. A parent task is the original task received from outside the system. Subtasks are the multiple tasks into which a parent task is split in order to complete it.
The invention will be further described with reference to the drawings and specific examples.
As shown in fig. 1, the non-coupled flow type large-flux asynchronous task processing system facing LVC simulation includes: the external interface receives external parent task and resource data from the outside, feeds back the external parent task and resource data to the task queue node according to the request of the task queue node, receives the task processing result fed back by the task queue node and issues the result outwards. The task queue node stores tasks (including external parent tasks and new tasks generated by the task processing nodes) and resource data, takes charge of splitting and merging the tasks, distributes the tasks to the task processing nodes, and stores the resource data to the data center; processing results from the task processing nodes and feeding back the processed results to the external interface. The task processing node respectively acquires task and resource data from the task queue node and the data center, processes the task, and feeds back a processing result and a new task generated in the processing process to the task queue node. The data center is used for storing data.
The external interface is an interface between the system and the outside and is responsible for processing interaction with the outside, receiving the mother task and the resource data required by operation from the outside of the system, feeding back the outside mother task and the resource data to the task queue node according to the request of the task queue node, receiving the task processing result fed back by the task queue node and issuing the task processing result outwards.
The task queue node stores tasks and resource data, is responsible for splitting and merging the tasks, distributes the tasks to the task processing nodes and stores the resource data to the data center; processing the processing result from the task processing node and feeding back to the external interface; the tasks include external parent tasks and new tasks generated by the task processing nodes.
The task queue nodes comprise a plurality of task queue nodes, and each task queue node judges whether to send a request to the external interface according to the current load condition.
And the task queue nodes periodically sort the priorities of the task processing nodes applying for the calculation tasks, and allocate tasks to the nodes with high priorities so as to realize node load balancing.
The task queue node stores external information and internal temporary information. Each acquired task and its resource data are numbered to obtain a task number and a resource index number, which are stored in the task queue list together with the flags "has subtasks" and "is subtask" and the subtask registry; task numbers and resource index numbers correspond one to one. As shown in fig. 2, tasks are numbered in order of reception; to distinguish parent tasks from subtasks, parent-task numbers all begin with M and subtask numbers all begin with Z. The resource index number is a randomly generated string that lets a task processing node find the corresponding resource data when processing a task; on receiving a task, the task queue node stores the task number, resource index number and the resource data required to complete the task into the data center. "Has subtasks" marks whether the task has been split into multiple subtasks; "is subtask" marks whether the task is itself a subtask. The subtask registry is a separate list; so that results can be merged accurately and efficiently after the subtasks are processed, it stores the task number, resource index number and completion flag of every subtask contained in the parent task. Once a task has been added to the task queue list, task processing nodes can apply to the task queue node for it; at the same time the task queue node provides the task number, resource index number and resource data to the data center.
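The numbering convention just described, parent numbers beginning with M and subtask numbers with Z assigned in order of reception, plus a randomly generated resource index number, can be sketched as follows; the index length and alphabet are assumptions, since the description only says the index is a randomly generated string:

```python
import itertools
import random
import string

_parent_seq = itertools.count(1)
_child_seq = itertools.count(1)

def next_task_number(is_subtask):
    """Number tasks in order of reception: M001, M002, ... for parent
    tasks, Z001, Z002, ... for subtasks (prefixes from the description)."""
    if is_subtask:
        return f"Z{next(_child_seq):03d}"
    return f"M{next(_parent_seq):03d}"

def new_resource_index(length=5):
    """Randomly generated string used to locate resource data in the
    data center; length and character set are illustrative assumptions."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(random.choices(alphabet, k=length))
```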
The internal temporary information includes: (1) the task queue list of tasks to be allocated, which holds three kinds of tasks: the original parent tasks acquired from the external interface, the subtasks into which a parent task is split in order to complete its computation, and new tasks generated by a task processing node while completing a task; (2) task processing results, also of three kinds: first, results of parent tasks acquired from the external interface that needed no splitting or merging and generated no new tasks during processing, which only need to be fed back to the external interface once computed by a task processing node; second, results of the several subtasks split from a parent task, which the task queue node merges to complete the parent task's computation before feeding back to the external interface; third, results of tasks newly generated by a task processing node while completing a task, which the task queue node processes before feeding back to the external interface.
The computation of a parent task may be complex; to make full use of computing-node performance and improve task processing efficiency, the task queue node may split a parent task into several subtasks after acquiring it from the external interface. For example, if completing a parent task requires splitting it into two subtasks, a record must be created in the task queue list for each of the two subtasks to store its information. The task processing nodes handle subtask processing and feed each result back to the task queue node; on receiving a subtask's result, the task queue node sets that subtask's completion flag, and once all subtasks are processed it merges their results into the result required by the parent task and finally feeds it back to the external interface.
When responding to task processing applications, the task queue node analyzes the software and hardware performance reported by each applying node (graphics card, CPU utilization rate, memory utilization rate, throughput, etc.), periodically prioritizes the task processing nodes applying for computing tasks, and distributes tasks to the nodes with high priority, thereby balancing the load across nodes.
The task processing node is responsible for acquiring tasks from the task queue node, acquiring the resource data required to process them from the data center according to the task number and resource index number, completing the computation of all parent tasks and subtasks, and feeding the processing results and any new tasks generated during processing back to the task queue node. A task processing node has two states, busy and idle: when it is not processing a task it is idle and may apply for work; once it acquires a computing task from the task queue node its state is set to busy and it does not apply for further tasks. After acquiring a task from the task queue node, the task processing node also maintains a task processing list, as shown in fig. 3, containing the task number, the resource index number and whether a new task was generated; the task number and resource index number are taken from the task queue list, and "whether a new task was generated" refers to new tasks created in order to complete the task. When submitting an application to the task queue node, the task processing node must also provide its own software and hardware information (graphics card, CPU utilization rate, memory utilization rate, throughput, etc.) so that the task queue node can distribute tasks reasonably.
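The busy/idle state machine and the task processing list can be sketched as a small class; the class and method names are assumptions:

```python
class ProcessingNode:
    """Minimal sketch of a task processing node: idle nodes apply for
    work, accepting a task makes them busy, finishing makes them idle."""
    def __init__(self):
        self.state = "idle"
        self.processing_list = []  # rows: task number, resource index, new-task flag

    def can_apply(self):
        return self.state == "idle"  # only idle nodes apply for tasks

    def accept(self, task_number, resource_index):
        assert self.can_apply(), "busy nodes do not apply for tasks"
        self.state = "busy"
        self.processing_list.append({"task_number": task_number,
                                     "resource_index": resource_index,
                                     "generates_new_task": False})

    def finish(self, task_number):
        # remove the completed task from the list and return to idle
        self.processing_list = [row for row in self.processing_list
                                if row["task_number"] != task_number]
        self.state = "idle"
```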
After the task processing node obtains a new task from the task queue node, it obtains the resource data required to complete the task from the data center according to the resource index number. While processing a task, a task processing node may generate a new task; a new task is processed the same way as a parent task and must be added to a task queue node, and the resource data it requires must be stored in the data center. For example, if a task processing node generates a new task while completing an original task, it feeds the new task back to the task queue node, which adds it to the task queue list; after the new task is completed, the task queue node merges its result with the result of the original task to form the final processing result.
The data center is responsible for storing resource data, saving the data resources of all tasks in a database by task number, resource index number and resource data. To conserve database resources, when a task processing node retrieves the resource data of a task from the data center, the corresponding resource index number and resource data are deleted from the data center.
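The delete-on-retrieval behaviour of the data center can be illustrated with a dictionary-backed sketch; a real deployment would use a database, as the description states:

```python
class DataCenter:
    """Sketch of the data center: resources are keyed by resource index
    and deleted as soon as a processing node fetches them."""
    def __init__(self):
        self._db = {}

    def store(self, task_number, resource_index, data):
        self._db[resource_index] = {"task": task_number, "data": data}

    def fetch(self, resource_index):
        # pop() is read-plus-delete, matching the behaviour described above
        return self._db.pop(resource_index)["data"]
```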
A parent task is a task received from outside the system; it may be split into multiple subtasks, and processing it may create new tasks.
In the invention, when a new task is added to a task queue node, the resource data involved in processing it is stored in the data center, and the resource index number and task number are recorded when the task is added to the task queue list. The task processing node acquires tasks from the task queue and, when processing a task, obtains the necessary data from the data center via the resource index number. New tasks may be generated during processing; a new task is handled the same way as a parent task, and the index numbers of parent and child tasks are correlated.
When the parent task is split into a plurality of subtasks, the subtask registration and the merging task registration need to be completed in order to complete the merging (result and data) of all the subtasks. All subtask numbers and resource index numbers are included in the registered merging task, and the task queue node needs to carry out special maintenance on the merging task, namely judging whether the subtask is completed or not.
The process flow of a parent task a is described in detail below:
(1) According to the request, the external interface distributes an external parent task A to a task queue node; the task queue node splits A into two subtasks B and C and stores them in the task queue list as shown in Table 1. Task A is numbered M001 with resource index number 01S23; subtask B is numbered Z001 with resource index number 11Z01; subtask C is numbered Z002 with resource index number 11Z02. At the same time, the task queue node stores the task numbers, resource index numbers and resource data into the database.
TABLE 1 task queue List
Task number | Resource index number | Has subtasks | Is subtask | Subtask registry
M001        | 01S23                 | YES          | NO         | Z001 (11Z01, not completed); Z002 (11Z02, not completed)
Z001        | 11Z01                 | NO           | YES        | (empty)
Z002        | 11Z02                 | NO           | YES        | (empty)
(2) The task processing nodes in the idle state request task processing information from task queue nodes, and the task queue nodes respectively distribute subtasks B and C to different task processing nodes according to the performance of the task processing nodes in consideration of system load balancing.
(3) The task processing node that receives the task changes its status to "busy" while storing the task in the task processing list.
(4) The task processing nodes for processing the subtasks B and C find out the corresponding resource index numbers 11Z01 and 11Z02 according to the task numbers Z001 and Z002 of the subtasks B and C respectively, acquire the resource data required by the processing of the subtasks B and C from the data center, and perform the task processing of the subtasks B and C.
(5) If a new task D is generated while subtask B is being processed, the task processing node feeds back the processing result, the new task D and the resource data needed to process D to the task queue node. The task queue node adds D to the task queue list in the task queue list format, modifies subtask B's record (changing its "has subtasks" flag from NO to YES and adding D's task number and resource index number to B's subtask registry), and stores D's resource data in the data center.
(6) And (3) the task processing node and the task queue node process the new task D according to the steps (2), (3) and (4).
(7) After new task D is processed, the task processing node feeds the result back to the task queue node, deletes D from its task processing list and sets its state to idle. The task queue node sets D's completion flag in subtask B's registry from NO to YES, merges D's result into B's to form B's final processing result, and then sets subtask B's completion flag in parent task A's subtask registry from NO to YES.
(8) When subtask C is processed, the task processing node feeds the result back to the task queue node, deletes C from its task processing list and sets its state to idle. The task queue node sets subtask C's completion flag in parent task A's subtask registry from NO to YES.
(9) The task queue node deletes the subtasks B and C from the task queue list, and combines the processing results of the subtasks B and C to form the processing result of the parent task A. And simultaneously, deleting the parent task A from the task queue list, feeding back the processing result to the external interface, and releasing the processing result of the parent task to the outside of the system by the external interface.
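The walkthrough of parent task A, splitting, processing, a subtask spawning a new task, and final merging, can be condensed into a toy driver; the split and process callbacks and the merge rule (collecting results in a dictionary) are illustrative assumptions:

```python
def run_parent_task(split, process):
    """Toy driver for steps (1)-(9): split the parent task, process each
    pending task (processing may emit new tasks), collect all results."""
    results = {}
    pending = list(split("A"))            # step (1): A -> [B, C]
    while pending:                        # steps (2)-(8)
        task = pending.pop(0)
        result, new_tasks = process(task)
        pending.extend(new_tasks)         # e.g. B generates D
        results[task] = result
    return results                        # step (9): merged result for A

# Example matching the walkthrough: B spawns new task D, C spawns nothing.
def demo_split(parent):
    return ["B", "C"]

def demo_process(task):
    if task == "B":
        return ("B-done", ["D"])
    return (f"{task}-done", [])
```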
The non-coupling flow type large-flux asynchronous task processing system oriented to LVC simulation reduces delay of a simulation system, improves communication efficiency and realizes LVC integrated high-efficiency simulation.
What is not described in detail in the present specification is a well known technology to those skilled in the art.

Claims (8)

1. An LVC simulation-oriented uncoupled streaming type large-flux asynchronous task processing system is characterized in that: the system comprises an external interface, a task queue node, a task processing node and a data center;
an external interface: receiving external parent task and resource data, feeding the external parent task and resource data back to the task queue node according to the request of the task queue node, receiving the task processing result fed back by the task queue node and publishing the task processing result outwards;
task queue node: storing task and resource data, which are responsible for splitting and merging tasks, distributing the tasks to task processing nodes, and storing the resource data to a data center; processing the processing result from the task processing node and feeding back to the external interface; the tasks comprise external parent tasks and new tasks generated by task processing nodes;
task processing nodes: respectively acquiring task and resource data from a task queue node and a data center, processing the task, and feeding back a processing result and a new task generated in the processing process to the task queue node;
and (3) a data center: storing the resource data;
the task queue nodes comprise a plurality of task queue nodes, and each task queue node judges whether to send a request to an external interface according to the current load condition;
the task queue node numbers the acquired task and resource data to obtain a task number and a resource index number, and then stores the task number, the resource index number, whether subtasks exist, whether the task is a subtask, and a subtask registry in a task queue list, wherein task numbers and resource index numbers correspond one to one;
the subtask registry is a separate list;
when a task queue node receives a task, it judges whether the task needs to be split according to the task's complexity and resource consumption;
when the task does not need to be split, the subtask registry is an empty list;
when the task needs to be split, the task number, resource index number and completion flag of every subtask contained in the task are stored in the subtask registry.
2. The LVC emulation-oriented uncoupled streaming high-throughput asynchronous task processing system of claim 1, wherein: the task queue node processes the processing result of the task processing node according to the following mode:
when the original task is split, after all the subtasks are processed, the task queue node is responsible for merging the processing results of all the subtasks to obtain the processing result of the original task;
when the original task is not split, the processing result of the corresponding task processing node is the processing result of the original task.
3. The LVC emulation-oriented uncoupled streaming high-throughput asynchronous task processing system of claim 1, wherein: when receiving a task, the task queue node stores the task number, the resource index number and resource data required for completing the task into a data center.
4. A non-coupled streaming high-throughput asynchronous task processing system oriented to LVC simulation according to claim 3, wherein: when the task queue node distributes the task to the task processing node, the task number and the resource index number of the task are simultaneously sent to the task processing node.
5. The LVC emulation-oriented uncoupled streaming high-throughput asynchronous task processing system of claim 4, wherein: the task processing nodes are multiple, each task processing node submits an application to a task queue node, acquires a task from the task queue node, acquires resource data needed by processing the task from a data center according to a resource index number, and performs calculation processing of the task;
when the task computation is completed and no new task was generated during processing, the processing result is fed back to a task queue node;
when the task calculation is completed and a new task is generated in the processing process, feeding back a processing result, the new task generated in the processing process and resource data required by the new task to a task queue node;
when the task calculation is not completed, no feedback is provided.
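The three feedback outcomes of claim 5 map to three message shapes. A sketch of the message a processing node would build (field names are assumptions for illustration):

```python
# Sketch of the three feedback cases in claim 5.
# Field names are illustrative assumptions.
def feedback(task_no, finished, output, new_task=None):
    """Build the message a processing node sends back to the queue node."""
    if not finished:
        return None                               # case 3: still computing, no feedback
    msg = {"task_no": task_no, "result": output}  # case 1: plain completion
    if new_task is not None:                      # case 2: completion plus a spawned task
        msg["new_task"] = new_task["task"]
        msg["new_resources"] = new_task["resources"]
    return msg

print(feedback("T1", finished=False, output=None))   # None
print(feedback("T1", finished=True, output=42))
print(feedback("T1", finished=True, output=42,
               new_task={"task": "T1.1", "resources": "R9"}))
```

Because an unfinished node stays silent, the queue node never blocks on a slow worker; it simply hears nothing until the result arrives, which is the asynchronous behavior the title names.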
6. The LVC simulation-oriented uncoupled streaming high-throughput asynchronous task processing system according to claim 5, wherein: after acquiring a task from the task queue node, the task processing node maintains a task processing list comprising the task number, the resource index number, and whether a new task is generated, where the task number and the resource index number are obtained directly from the task queue node.
7. The LVC simulation-oriented uncoupled streaming high-throughput asynchronous task processing system according to claim 1, wherein: when submitting an application to the task queue node, the task processing node must also provide its software and hardware information to the task queue node, including the computer's graphics card, CPU utilization, memory utilization, and throughput.
8. The LVC simulation-oriented uncoupled streaming high-throughput asynchronous task processing system according to claim 7, wherein the task queue node distributes tasks to the task processing nodes as follows: the task queue node periodically sorts the task processing nodes applying for computation tasks by a priority derived from their software and hardware information, and distributes tasks to the task processing nodes with high priority, thereby achieving load balancing among the task processing nodes.
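A sketch of claim 8's periodic ranking step. The patent does not fix a scoring formula; the weights here (favoring low CPU/memory utilization and high throughput) are assumptions for illustration:

```python
# Illustrative load-balancing step for claim 8. The priority formula
# is an assumed stand-in; the claim only requires sorting applicants
# by their reported software/hardware state.
def priority(node):
    # Lower utilization and higher throughput -> higher priority.
    return (100 - node["cpu_pct"]) + (100 - node["mem_pct"]) + node["throughput"]

def assign(tasks, applicants):
    ranked = sorted(applicants, key=priority, reverse=True)
    # Walk the ranked list round-robin so load spreads across nodes.
    return {t: ranked[i % len(ranked)]["name"] for i, t in enumerate(tasks)}

nodes = [
    {"name": "A", "cpu_pct": 80, "mem_pct": 70, "throughput": 10},
    {"name": "B", "cpu_pct": 20, "mem_pct": 30, "throughput": 50},
]
print(assign(["T1", "T2"], nodes))   # {'T1': 'B', 'T2': 'A'}
```

Re-running the sort each period lets the ranking track the nodes' changing utilization, which is how the claimed load balance is maintained over time.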
CN201811280764.5A 2018-10-30 2018-10-30 LVC simulation-oriented uncoupled streaming type large-flux asynchronous task processing system Active CN109214132B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811280764.5A CN109214132B (en) 2018-10-30 2018-10-30 LVC simulation-oriented uncoupled streaming type large-flux asynchronous task processing system

Publications (2)

Publication Number Publication Date
CN109214132A CN109214132A (en) 2019-01-15
CN109214132B true CN109214132B (en) 2023-06-30

Family

ID=64997612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811280764.5A Active CN109214132B (en) 2018-10-30 2018-10-30 LVC simulation-oriented uncoupled streaming type large-flux asynchronous task processing system

Country Status (1)

Country Link
CN (1) CN109214132B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104731663A (en) * 2015-03-31 2015-06-24 北京奇艺世纪科技有限公司 Task processing method and system
CN106936899A (en) * 2017-02-25 2017-07-07 九次方大数据信息集团有限公司 The collocation method of distributed statistical analysis system and distributed statistical analysis system
CN107632890A (en) * 2017-08-10 2018-01-26 北京中科睿芯科技有限公司 Dynamic node distribution method and system in a kind of data stream architecture
CN107707592A (en) * 2017-01-24 2018-02-16 贵州白山云科技有限公司 Task processing method, node and content distributing network
CN107743246A (en) * 2017-01-24 2018-02-27 贵州白山云科技有限公司 Task processing method, system and data handling system
CN107766129A (en) * 2016-08-17 2018-03-06 北京金山云网络技术有限公司 A kind of task processing method, apparatus and system
CN107766572A (en) * 2017-11-13 2018-03-06 北京国信宏数科技有限责任公司 Distributed extraction and visual analysis method and system based on economic field data
WO2018121738A1 (en) * 2016-12-30 2018-07-05 北京奇虎科技有限公司 Method and apparatus for processing streaming data task

Similar Documents

Publication Publication Date Title
CN107122243B (en) The method of Heterogeneous Cluster Environment and calculating CFD tasks for CFD simulation calculations
TWI547817B (en) Method, system and apparatus of planning resources for cluster computing architecture
CN107633125B (en) Simulation system parallelism identification method based on weighted directed graph
CN101652750B (en) Data processing device, distributed processing system and data processing method
US20090094605A1 (en) Method, system and program products for a dynamic, hierarchical reporting framework in a network job scheduler
CN108090731A (en) A kind of information processing method and equipment
CN103870314A (en) Method and system for simultaneously operating different types of virtual machines by single node
CN103401939A (en) Load balancing method adopting mixing scheduling strategy
CN112084015B (en) Cloud computing-based simulation cloud platform building system and method
CN110659110B (en) Block chain based distributed computing method and system
CN103729257A (en) Distributed parallel computing method and system
EP3000030A2 (en) Methods and apparatus for iterative nonspecific distributed runtime architecture and its application to cloud intelligence
CN104023062A (en) Heterogeneous computing-oriented hardware architecture of distributed big data system
Taylor et al. Integrating heterogeneous distributed COTS discrete-event simulation packages: an emerging standards-based approach
CN107463357A (en) Task scheduling system, dispatching method, Simulation of Brake system and emulation mode
CN116263701A (en) Computing power network task scheduling method and device, computer equipment and storage medium
CN103595654A (en) HQoS implementation method, device and network equipment based on multi-core CPUs
CN104965762B (en) A kind of scheduling system towards hybrid task
CN109214132B (en) LVC simulation-oriented uncoupled streaming type large-flux asynchronous task processing system
CN104156505A (en) Hadoop cluster job scheduling method and device on basis of user behavior analysis
CN111309488B (en) Method and system for sharing computing resources of unmanned aerial vehicle cluster and computer storage medium
CN104360962A (en) Multilevel nested data transmission method and system matched with high-performance computer structure
CN110502337B (en) Optimization system for shuffling stage in Hadoop MapReduce
CN116820714A (en) Scheduling method, device, equipment and storage medium of computing equipment
Mishra et al. A memory-aware dynamic job scheduling model in Grid computing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant