CN109522106B - Risk value simulation dynamic task scheduling method based on cooperative computing - Google Patents
Risk value simulation dynamic task scheduling method based on cooperative computing
- Publication number: CN109522106B (application CN201811231253.4A)
- Authority
- CN
- China
- Prior art keywords
- computing
- task
- simulation
- calculation
- risk value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
Abstract
The invention discloses a risk value simulation dynamic task scheduling method based on cooperative computing. Under a three-layer parallel computing framework composed of core nodes, management nodes and computing nodes, the computing nodes divide the tasks of the Monte Carlo simulation process and build task queues; computing tasks are then grabbed, through the robbing branch, by the Pthread thread corresponding to each computing device; finally, the computing nodes sort the simulation results locally in parallel and then merge-sort them, which reduces the load on the management process. The invention mainly adopts the divide-and-conquer idea and performs dynamic task scheduling across the different computing devices (CPU and MIC), achieving dynamic load balancing during computation and maximizing computing efficiency.
Description
Technical Field
The invention relates to the field of high-performance computing, in particular to a risk value simulation dynamic task scheduling method based on cooperative computing.
Background
The Monte Carlo simulation method for the risk value simulates the fluctuation of asset risk factors by generating sequences of random numbers drawn from the corresponding distributions. Because the Monte Carlo simulation of the risk value is computationally expensive, it must be optimized with parallel computing techniques, and given the load problem of parallel computing, the effectiveness of the dynamic scheduling technique strongly influences the final computing efficiency. Regarding the feasibility of parallelizing the Monte Carlo simulation: analysis shows that, within the simulation process, there is no dependence between the k-th and the (k+1)-th simulation, and each independent simulation produces its asset valuation result on its own, so the computation of the simulation stage can be parallelized; the sorting of the asset valuation results has some degree of dependence, but it too can be parallelized by adapting the sorting algorithm.
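The structure described above can be illustrated with a small sketch (hypothetical parameter names and a toy revaluation model, not the patent's asset model): N independent simulations, a sort of the valuations, and the K-th value taken with K = N × (1 − α).

```python
import random

def simulate_var(n_sims, alpha, mu=0.0, sigma=0.02, value=1_000_000.0):
    """Toy risk value Monte Carlo: each simulation is independent, so the
    loop parallelizes trivially; only the final sort couples the results."""
    # independent revaluations of the asset (illustrative normal model)
    outcomes = [value * (1 + random.gauss(mu, sigma)) for _ in range(n_sims)]
    outcomes.sort()                    # result-sorting stage
    k = int(n_sims * (1 - alpha))      # K = N * (1 - alpha)
    return outcomes[k]                 # the K-th valuation is the result
```

A higher confidence level α picks a value deeper in the lower tail of the sorted valuations.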
When parallelization is implemented, uneven task allocation easily leads to low utilization of computing resources, so a key issue for running efficiency is how to achieve load balancing. Dynamic scheduling is an optimization method that dynamically determines the allocation ratio of the load according to the performance of the processing cores during task execution. Compared with static scheduling, dynamic scheduling has higher overhead, but it predicts more accurately and utilizes computing resources better. Rudolph et al. proposed an improved self-scheduling algorithm (GSS), in which, after a processor completes its assigned task, it adjusts the size of the next task it takes to B_c = R × V_c / ΣV, where R is the remaining task amount, V_c is the processing speed of core c, and ΣV is the sum of the processing speeds of the processors. Kaleem et al. proposed a dynamic scheduling algorithm (AHS) based on unified memory address space access, which analyzes the load characteristics and execution rate of the processing cores in real time. Belviranli et al. proposed a dynamic adaptive scheduling algorithm (DSS), characterized by a linear reduction of the task amount allocated to the processing cores as the remaining task amount decreases. Combining the advantages of these three methods, a dynamic task scheduling algorithm (DTSA) for CPU-GPU heterogeneous multi-core systems was proposed in 2016; it uses the GPU's combined dedicated-thread scheduling technique, accurately estimates the computing capacity of the CPU and the GPU, and dynamically allocates tasks to the computing resources according to that capacity and the remaining iteration amount, thereby achieving load balance across the processing cores.
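The GSS-style chunk sizing mentioned above can be sketched as follows; the expression is partly garbled in the source, so this assumes it reads B_c = R × V_c / ΣV (faster cores take proportionally larger slices of the remaining work R).

```python
def gss_chunk(remaining, speeds, c):
    """Chunk size for core c under the reconstructed rule
    B_c = R * V_c / sum(V); at least one iteration is always taken."""
    return max(1, int(remaining * speeds[c] / sum(speeds)))
```

For example, with 1000 iterations remaining and core speeds [1, 1, 2], the fast core takes a chunk twice the size of a slow core's.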
Current dynamic scheduling methods mostly determine the execution load dynamically from the performance of CPUs and GPUs; the load-balancing problem for the high-performance MIC (Many Integrated Core) architecture still deserves study.
Disclosure of Invention
In order to overcome at least one defect in the prior art, the invention provides a risk value simulation dynamic task scheduling method based on cooperative computing.
The present invention aims to solve the above technical problem at least to some extent.
The invention mainly aims to provide a risk value simulation dynamic task scheduling method based on cooperative computing.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a risk value simulation dynamic task scheduling method based on collaborative computing comprises the following steps:
s1, dividing a risk value Monte Carlo simulation task, generating a global simulation task queue and a global sequencing task queue, and designing a robbing branch and a computing capacity index for subsequent task allocation;
s2, distributing tasks for all the called CPUs and MICs in a three-layer computing frame;
s3, executing a task packet dynamic distribution method according to real-time computing capability among the computing devices;
and S4, finishing the calculation of all the nodes and outputting the result.
Furthermore, the three-layer computing framework is a three-layer parallel computing framework composed of core nodes, management nodes and computing nodes.
Further, the computing capacity index includes a computing capacity C and a real-time load rate L. C represents the comprehensive computing capacity of all computing devices under the management node:
C = Σ(M0/T0)
L represents the real-time load rate of all the computing devices under the management node:
L = Σ(M/T) / C
where the sums run over the computing devices under the node, M0 is the computation amount of the task packet a computing device took last time, T0 is the corresponding completion time, M is the computation amount of the task packet the computing device has taken this time, and T is the corresponding running time. Initial values are set for M0 and T0 such that, when the initial values are used, M0/T0 equals the rated efficiency of the computing device.
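A per-device version of the capacity index can be sketched as follows; the source's expressions are not fully legible, so this assumes C = M0/T0 and L = (M/T)/(M0/T0) per device, with a management node aggregating the per-device values.

```python
def capacity_and_load(m0, t0, m, t):
    """Per-device index under the reconstructed formulas:
    capacity C is the throughput on the last task packet, and the load
    rate L is the current throughput relative to that capacity."""
    c = m0 / t0          # C = M0 / T0 (rated efficiency at the initial values)
    l = (m / t) / c      # L = (M / T) / C
    return c, l
```

With the initial values chosen so that M0/T0 equals the rated efficiency, the very first L compares the device's current rate against its rated rate.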
Further, in the robbing branch the computing node assigns tasks to each management node according to C and L: it first selects the nodes whose real-time load rate L is below a threshold ε, and among them preferentially selects the node with the highest computing capacity C; when all nodes have real-time load rates below the threshold, the load rate is no longer considered in the selection.
Furthermore, the computing process of the risk value Monte Carlo simulation first computes the valuation at each independent moment, then sorts all the valuations, and finally takes the K-th value as the simulation result, where
K=N×(1-α)
with N the total number of simulations and α a given number between 0 and 1.
Further, all the operations of the computing process of the risk value Monte Carlo simulation are divided into two stages: the first stage consists of independent computing threads with identical computation patterns, plus the merge-sort within each maximum segment; the second stage is the merge-sort that takes the maximum segments as its minimum units. The computing tasks of the first stage are defined as the global simulation task queue, and those of the second stage as the global sorting task queue.
Further, the global simulation task queue is generated directly from the input data, while the global sorting task queue is generated during the computation. The priority of a sorting task packet at level i is D_i, set as
D_i = log2(N_D) + 1 − i
where i = 1, 2, ..., log2(N_D) and N_D is the number of maximum segments. In the computation of the risk value Monte Carlo simulation the total number of simulations N is set to a fixed value, so N_D is also fixed. When i = 1, D_1 is the maximum of the D_i, and D_1 + 1 > D_i always holds; therefore, by setting the priority of all segmented task packets of the global simulation task queue to D_1 + 1, the segmented task packets of the global simulation task queue always have the highest priority.
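The priority assignment can be sketched as follows, assuming the reconstructed form D_i = log2(N_D) + 1 − i, which matches the worked example's 1024 maximum segments (ten merge levels, simulation packets at priority 11).

```python
import math

def merge_priorities(n_segments):
    """Priorities D_i for the merge tree: the bottom merge level (i = 1)
    has the highest sorting priority, decreasing toward the root, and
    simulation packets get D_1 + 1, which exceeds every D_i."""
    levels = int(math.log2(n_segments))           # number of merge levels
    d = {i: levels + 1 - i for i in range(1, levels + 1)}
    sim_priority = d[1] + 1                       # D_1 + 1, always highest
    return d, sim_priority
```

This makes simulation packets always win the dispatch over sorting packets, and lower merge levels win over higher ones.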
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention realizes dynamic load balance in the calculation process by performing dynamic task scheduling on different calculation devices of the CPU and the MIC, thereby maximizing the calculation efficiency.
Drawings
FIG. 1 is a schematic diagram of a computing framework of the method according to the embodiment of the present invention.
FIG. 2 is a diagram illustrating a global simulation task queue according to an embodiment of the present invention.
FIG. 3 is a diagram of a global ordering task queue according to an embodiment of the present invention.
FIG. 4 is a flowchart of a method according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further described with reference to the drawings and the embodiments.
The invention provides a risk value simulation dynamic task scheduling method based on cooperative computing. Based on the divide-and-conquer idea, a series of scheduling rules are formulated for the characteristics of the CPU and the MIC, so as to achieve load balance during computation and thereby maximize computing efficiency. Since the risk value Monte Carlo simulation computation divides into two main parts, simulation and sorting, with different computational costs, different scheduling rules must be adopted for the two processes.
The computing framework of the invention consists of core nodes, management nodes and computing nodes: the computing nodes are responsible for data input and output, the management nodes for interfacing the computing nodes with the core nodes, and the core nodes for the computation itself; the framework is shown schematically in Fig. 1. Taking Tianhe-2 as an example, each physical computing node of the system offers two kinds of computing resources, CPU and MIC. For the cooperative invocation of these two resources, two multithreaded parallelization technologies, Pthreads and OpenMP, are used in the computing process. Taking a typical computing node composed of one CPU plus three MICs as an example, there are four computing devices: the three MIC cards form three computing devices and the CPU forms one. Pthreads parallelism is mainly used for device-level parallelism across the CPU and the three MICs, controlling the distribution of computing tasks between the CPU and the MICs; OpenMP is mainly used for thread-level parallelism on each computing device. Because the computing capabilities of the CPU and the MIC differ, the CPU is mainly responsible for scheduling-related operations and the MIC for computation-related operations.
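The device-level Pthreads parallelism described above amounts to one scheduling thread per computing device pulling task packets from a shared queue until it is drained; a minimal sketch using Python threads in place of Pthreads (names are illustrative):

```python
import queue
import threading

def run_devices(tasks, n_devices=4):
    """Device-level dynamic scheduling sketch: one worker thread per
    computing device (standing in for the Pthreads CPU/MIC threads)
    grabs task packets from a shared queue until it is empty."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    done = [[] for _ in range(n_devices)]   # per-device record of work

    def worker(dev):
        while True:
            try:
                t = q.get_nowait()          # grab the next task packet
            except queue.Empty:
                return                      # queue drained: device done
            done[dev].append(t)             # placeholder for the real computation

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_devices)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return done
```

Because each device pulls its next packet only when free, faster devices naturally process more packets, which is the load-balancing effect the framework relies on.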
One risk value Monte Carlo simulation computation is defined as a task packet, and one task packet is issued to one computing node. The overall computing process of the risk value Monte Carlo simulation first computes the valuation at each independent moment, then sorts all the valuations, and finally takes the K-th value as the simulation result, where
K=N×(1-α)
with N the total number of simulations and α a given number between 0 and 1.
Analyzing the overall computing process, it can be divided into two processes: the independent simulation process and the result-sorting process. Depending on the choice of sorting algorithm, the operations contained in these two processes can be divided into two stages: the first stage is a large number of independent computing threads with identical computation patterns, plus the merge-sort within each maximum segment; the second stage is the merge-sort that takes the maximum segments as its minimum units. The computing tasks of the first stage are defined as the global simulation task queue, and those of the second stage as the global sorting task queue; together these two queues form the global task queue. The global simulation task queue is generated directly from the input data, with the structure shown in Fig. 2; the global sorting task queue is generated during the computation, with the structure shown in Fig. 3, where each leaf node on the right corresponds to a maximum segment in Fig. 2, each node on the left pairs two identical nodes on the right, and the root layer has no corresponding left task queue. The priority of a sorting task packet at level i is D_i, set as
D_i = log2(N_D) + 1 − i
where i = 1, 2, ..., log2(N_D) and N_D is the number of maximum segments. In the computation of the risk value Monte Carlo simulation the total number of simulations N is set to a fixed value, so N_D is also fixed. When i = 1, D_1 is the maximum of the D_i, and D_1 + 1 > D_i always holds; by setting the priority of all segmented task packets of the global simulation task queue to D_1 + 1, the segmented task packets of the global simulation task queue always have the highest priority.
As shown in Fig. 4, when a management node finishes its task assignment, it enters the robbing branch and submits its computing capacity indices, denoted C and L. C represents the comprehensive computing capacity of all computing devices under the management node and L their real-time load rate:
C = Σ(M0/T0),  L = Σ(M/T) / C
where the sums run over the computing devices under the node, M0 is the computation amount of the task packet a device took last time, T0 is the corresponding completion time, M is the computation amount of the task packet the device has taken this time, and T is the corresponding running time. Initial values are set for M0 and T0 such that, when the initial values are used, M0/T0 equals the rated efficiency of the computing device.
The task assignment rules shown in FIG. 4 are as follows:
(1) The computing node dispatches the global task queue; dispatch is based on task-packet priority, with all segmented task packets of the global simulation task queue at priority D_1 + 1 and the sorting task packets of the global sorting task queue at priority D_i;
(2) the segmented task packets are assigned to the management nodes in the robbing branch, according to the computing capacity indices each management node submitted when it entered the robbing branch;
(3) each management node then distributes its acquired task packets among its computing devices according to their real-time computing capacity.
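Rule (2)'s node selection can be sketched as follows (illustrative field names; the fallback when every node is below the threshold follows the stated rule that the load rate is then ignored and capacity alone decides):

```python
def pick_node(nodes, eps=0.25):
    """Select a management node from the robbing branch: keep those with
    load rate L below the threshold eps, then take the one with the
    highest capacity C; if all nodes are below the threshold, the load
    rate is ignored and C alone decides."""
    idle = [n for n in nodes if n["L"] < eps]
    pool = idle if idle else nodes
    return max(pool, key=lambda n: n["C"])
```

For instance, a node with C = 9 but L = 0.9 loses to a less capable node whose devices are nearly free.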
The specific implementation is as follows:
example 1
Suppose a risk value Monte Carlo simulation is expected to produce one million simulation results and takes the median as the simulation result, and each computing unit of the platform used contains 1 CPU and 3 MICs, with 61 cores per MIC. The expected total number of simulations is then N0 = 1000000 with α = 0.5, the standard size of a segmented task packet is P_D = 2 × 61 = 122, and the number of maximum segments is N_D = 1024.
When the task starts, the computing node generates a global simulation task queue containing the 1024 maximum segments, and the number of segmented task packets in each segment is
N_P = ⌊N0 / (N_D × P_D)⌋ = 8
The actual number of simulations is
N = N_P × N_D × P_D = 999424
and the loss rate is
(N0 − N) / N0 = 0.0576%
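The example's bookkeeping can be checked directly:

```python
# Reproducing the worked example: N0 = 1,000,000 target simulations,
# P_D = 2 * 61 = 122 simulations per segmented task packet,
# N_D = 1024 maximum segments.
N0 = 1_000_000
P_D = 2 * 61
N_D = 1024

N_P = N0 // (N_D * P_D)        # segmented task packets per segment
N = N_P * N_D * P_D            # actual number of simulations
loss_rate = (N0 - N) / N0      # fraction of requested simulations dropped
```

The rounding down to whole task packets costs only 576 of the one million requested simulations, a loss rate of 0.0576%.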
Assume that the computing framework is actually configured with 1 compute node, 8 management nodes per compute node, and 8 core nodes per management node. The specific calculation process is as follows:
1. The computing node dispatches the global simulation task queue; since all 8 management nodes are in the robbing branch at this point, each management node receives one maximum segment.
2. Each management node distributes the segmented task packets inside its maximum segment according to the rated computational efficiency M0/T0 of each device; this being the first assignment, the initial values are used, so M0/T0 is the rated efficiency.
3. When a management node finishes this assignment, it enters the robbing branch.
4. The computing node assigns tasks to the management nodes in the robbing branch according to C and L: it first selects the nodes whose real-time load rate L is below the threshold ε = 0.25, then among them preferentially selects the node with the higher computing capacity C; when all nodes have load rates below the threshold, the load rate is no longer considered in the selection.
5. Each core node computes the simulation process of the segmented task packets it has taken and sorts the results of each packet. A single segmented task packet has size 122; a core node carries 3 MICs with 61 cores each, the computation tasks are allocated according to the devices' real-time computing capacity, and an ordered sequence of length 122 is returned to the management node.
6. Once all results of the segmented task packets of one maximum segment have been returned, the management node begins the merge of the 8 ordered sequences using all core nodes under it; after all steps of the merge have been assigned, the management node enters the robbing branch.
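Step 6's merge of the eight ordered sequences can be sketched with a k-way merge; here heapq.merge stands in for the stepwise pairwise merging the management node distributes across its core nodes.

```python
import heapq

def merge_sorted(runs):
    """Merge several already-sorted sequences (e.g. the ordered
    length-122 results returned by the core nodes) into one sorted
    sequence in a single k-way pass."""
    return list(heapq.merge(*runs))
```

In the framework itself the merge is performed level by level so that each level can be dispatched as a sorting task packet with its own priority D_i.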
7. According to dispatch rule (1), simulation computation has the highest priority, and the priority decreases from bottom to top of the sorting tree. When the global simulation task queue is empty, dispatch of the global sorting task queue begins; the task amount of a sorting task packet depends on its level, for example, the task amount of the D_1 layer is the maximum segment size N_P × P_D, and that of the D_2 layer is N_P × P_D × 2.
Claims (4)
1. A risk value simulation dynamic task scheduling method based on collaborative computing is characterized by comprising the following steps:
S1, dividing the risk value Monte Carlo simulation task, generating a global simulation task queue and a global sorting task queue, and designing a robbing branch and a computing capacity index for subsequent task allocation;
S2, distributing tasks to all the invoked CPUs and MICs within a three-layer computing framework;
S3, executing the dynamic task-packet distribution method according to the real-time computing capability of the computing devices;
S4, completing the computation on all computing nodes and outputting the results;
one risk value Monte Carlo simulation computation is defined as a task packet, and one task packet is issued to one computing node; the computing process of the risk value Monte Carlo simulation computation first computes the valuation at each independent moment, then sorts all the valuations, and finally takes the K-th value as the simulation result, where
K = N × (1 − α)
with N the total number of simulations and α a given number between 0 and 1;
all operations of the computing process of the risk value Monte Carlo simulation computation are divided into two stages: the first stage consists of independent computing threads with identical computation patterns, plus the merge-sort within each maximum segment; the second stage is the merge-sort that takes the maximum segments as its minimum units; the computing tasks of the first stage are defined as the global simulation task queue, and those of the second stage as the global sorting task queue;
the global simulation task queue is generated directly from the input data; the global sorting task queue is generated during the computation, and the priority of a task packet at level i of the global sorting task queue is D_i, set as
D_i = log2(N_D) + 1 − i
where i = 1, 2, ..., log2(N_D) and N_D is the number of maximum segments; in the computation of the risk value Monte Carlo simulation the total number of simulations N is set to a fixed value, so N_D is also fixed; when i = 1, D_1 is the maximum of the D_i, and D_1 + 1 > D_i always holds; the priority of all segmented task packets in the global simulation task queue is set to D_1 + 1, so that the segmented task packets of the global simulation task queue always have the highest priority.
2. The method for scheduling the risk value simulation dynamic tasks based on the cooperative computing as recited in claim 1, wherein the three-layer computing framework is a three-layer parallel computing framework composed of core nodes, management nodes and computing nodes.
3. The collaborative-computing-based risk value simulation dynamic task scheduling method according to any one of claims 1-2, wherein the computing capacity index includes a computing capacity C and a real-time load rate L; C represents the comprehensive computing capacity of all computing devices under the management node:
C = Σ(M0/T0)
and L represents the real-time load rate of all computing devices under the management node:
L = Σ(M/T) / C
where the sums run over the computing devices under the node, M0 is the computation amount of the task packet a computing device took last time, T0 is the corresponding completion time, M is the computation amount of the task packet the computing device has taken this time, and T is the corresponding running time; initial values are set for M0 and T0 such that, when the initial values are used, M0/T0 equals the rated efficiency of the computing device.
4. The collaborative-computing-based risk value simulation dynamic task scheduling method according to any one of claims 1-2, wherein the computing node assigns tasks to each management node in the robbing branch according to C and L: it first selects the nodes whose real-time load rate L is below a threshold ε, and then selects among them the node with the highest computing capacity C.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811231253.4A CN109522106B (en) | 2018-10-22 | 2018-10-22 | Risk value simulation dynamic task scheduling method based on cooperative computing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109522106A CN109522106A (en) | 2019-03-26 |
CN109522106B true CN109522106B (en) | 2023-01-17 |
Family
ID=65773009
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110262879B (en) * | 2019-05-17 | 2021-08-20 | 杭州电子科技大学 | Monte Carlo tree searching method based on balanced exploration and utilization |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103884343A (en) * | 2014-02-26 | 2014-06-25 | 海华电子企业(中国)有限公司 | Microwave integrated circuit (MIC) coprocessor-based whole-network shortest path planning parallelization method |
CN104123190A (en) * | 2014-07-23 | 2014-10-29 | 浪潮(北京)电子信息产业有限公司 | Load balance method and device of heterogeneous cluster system |
CN104331641A (en) * | 2014-10-11 | 2015-02-04 | 华中科技大学 | Fluorescent Monte-Carlo simulation method based on cluster-type GPU (Graphic Processing Unit) acceleration |
CN104680339A (en) * | 2015-03-26 | 2015-06-03 | 中国地质大学(武汉) | Household appliance scheduling method based on real-time electricity price |
CN104834556A (en) * | 2015-04-26 | 2015-08-12 | 西北工业大学 | Mapping method for multimode real-time tasks and multimode computing resources |
CN106970836A (en) * | 2017-03-20 | 2017-07-21 | 联想(北京)有限公司 | The method and system of execution task |
CN107087019A (en) * | 2017-03-14 | 2017-08-22 | 西安电子科技大学 | A kind of end cloud cooperated computing framework and task scheduling apparatus and method |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8959525B2 (en) * | 2009-10-28 | 2015-02-17 | International Business Machines Corporation | Systems and methods for affinity driven distributed scheduling of parallel computations |
US9342368B2 (en) * | 2010-08-31 | 2016-05-17 | International Business Machines Corporation | Modular cloud computing system |
US10152240B2 (en) * | 2016-10-31 | 2018-12-11 | Chicago Mercantile Exchange Inc. | Resource allocation based on transaction processor classification |
Non-Patent Citations (1)
Liu Xiaoming et al., "Monte Carlo simulation modeling based on grid task scheduling", Journal of Jilin University (Information Science Edition), Vol. 25, No. 1, Feb. 2007, pp. 116-120.
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||