CN116932174B - Dynamic resource scheduling method, device, terminal and medium for EDA simulation task - Google Patents


Info

Publication number
CN116932174B
Authority
CN
China
Prior art keywords
model
simulation
error
prediction
weight
Prior art date
Legal status
Active
Application number
CN202311204519.7A
Other languages
Chinese (zh)
Other versions
CN116932174A (en)
Inventor
陈浩南
陈华
周旻
郁发新
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN202311204519.7A
Publication of CN116932174A
Application granted
Publication of CN116932174B
Status: Active

Classifications

    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues (under G06F 9/00 arrangements for program control; G06F 9/46 multiprogramming arrangements; G06F 9/48 program initiating and switching)
    • G06F 9/455 Emulation; interpretation; software simulation, e.g. virtualisation or emulation of application or operating system execution engines (under G06F 9/44 arrangements for executing specific programs)
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management (under Y02 climate change mitigation technologies in ICT)

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application provides a dynamic resource scheduling method, device, terminal and medium for EDA simulation tasks. By combining resource-fluctuation prediction with corresponding dynamic resource adjustment, it allocates resources to EDA simulation tasks more reasonably, thereby improving system resource utilization and addressing the technical problem that the original system's computing-resource utilization is low over longer periods (only about 50%, with a large fluctuation range).

Description

Dynamic resource scheduling method, device, terminal and medium for EDA simulation task
Technical Field
The application relates to the technical field of EDA simulation design, and in particular to a dynamic resource scheduling method, device, terminal and medium for EDA simulation tasks.
Background
EDA (Electronic Design Automation) simulation verification is a necessary task for confirming the rationality and correctness of an integrated-circuit design after the designer completes the design phase. In the current environment, products keep moving toward higher integration, higher operating frequency and smaller size. Under these conditions, engineers make dimensional changes on the order of microns at the chip level, or centimeters at the module level, during the design phase. As a result, the demand for and consumption of computing resources increases significantly when performing the corresponding large-scale simulation tasks (electrical-performance, electromagnetic, structural and thermal simulation, among others), so task parallelization is often required for acceleration. A resource management system is typically needed to provide load balancing; its primary responsibilities are efficient resource isolation, resource sharing among multiple parallel or distributed applications, and the placement of computing tasks when they are mapped to system nodes. The resource management system decides when and how to allocate the resources of a compute node to a particular application, which is commonly referred to as job scheduling.
However, parallel tasks in a system vary widely in size, and as large-scale concurrent tasks keep entering the queue, the system easily becomes fragmented: idle resources cannot meet a task's demand and must wait for other resources to be released, so system utilization stays low, jobs wait for long periods, and simulation completion time suffers severely. Meanwhile, the complexity of EDA simulation tasks makes the computing resources each verification task requires hard to determine: allocating too much wastes system computing resources, while allocating too little makes the simulation inefficient. Moreover, the resources an EDA simulation task needs fluctuate dynamically at runtime, so a fixed allocation cannot match this dynamic behavior and leads to resource waste.
At present, the following means exist for addressing the low parallel efficiency of computing systems:
1. Design a better scheduling algorithm that forms a more reasonable job queue under large-scale concurrent tasks, reducing system fragmentation;
2. Estimate the resources a task requires in advance and grant the simulation task a more reasonable amount at allocation time, reducing resource waste.
However, the first method presupposes accurate task information, including the resources and time each task needs, in order to form a reasonable scheduling queue; since the resources an EDA simulation task requires are hard to determine and its execution time varies with the amount of resources granted, accurate scheduling is difficult. The second method predicts a reasonable amount of start-up resources for the EDA simulation task in advance; however, EDA simulation tasks exhibit distinctive dynamic load fluctuation, i.e. the computing resources used during a run change across time periods, so allocating a fixed, ample amount to the task still wastes resources in some periods.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, an object of the present application is to provide a dynamic resource scheduling method, apparatus, terminal and medium for EDA simulation tasks, to solve the technical problem of poor dynamic resource allocation for EDA simulation tasks.
To achieve the above and other related objects, a first aspect of the present application provides a dynamic resource scheduling method for EDA simulation tasks, including: setting a plurality of simulation prediction basic models for predicting computing resources required by EDA simulation tasks, and setting a corresponding weight adjustment module for each simulation prediction basic model; the weight adjustment module evaluates according to feedback of the prediction result of the corresponding model in each period, and updates the weight of the corresponding model according to the evaluation result; calculating the integrated prediction results of all the simulation prediction basic models according to the output data of each simulation prediction basic model and the corresponding latest weight; wherein the sum of the weight values of the weight adjustment modules is 1; and adjusting EDA task resources on each computing node according to the integrated prediction result.
In some embodiments of the first aspect of the present application, the process of updating the model weights by the weight adjustment module includes: initializing the average value of the weight of each model; predicting the resource use condition of a preset task in the current period by using a simulation prediction basic model, and evaluating the feedback of each simulation prediction basic model to the environment based on an evaluation feedback algorithm; and updating the weight of each simulation prediction model according to the evaluation result, and accordingly determining the integrated prediction result of the resource use condition of the preset task in the next period.
In some embodiments of the first aspect of the present application, the evaluation feedback algorithm comprises: acquiring a first error value between the resource-use predicted value and the resource-use actual value in the previous period, and a second error value between the predicted value and the actual value in the current period; judging a performance-evaluation feedback result of the simulation prediction base model from the comparison of the first and second error values; and adjusting the model weights according to that feedback result, so that the weights of models with good performance feedback are increased.
In some embodiments of the first aspect of the present application, judging the performance-evaluation feedback result from the comparison of the first and second error values, and adjusting the model weights accordingly, includes: if the first error is larger than the second error, the prediction performance of the simulation prediction base model is judged good, and its model weight is increased accordingly; if the first error equals the second error, the prediction performance is judged unchanged, and the weight is left unchanged; if the first error is smaller than the second error, the prediction performance is judged poor, and the weight is reduced accordingly.
In some embodiments of the first aspect of the present application, the first error value is calculated by computing the corresponding first mean absolute error and first root mean square error from the previous period's resource-use predicted and actual values, and taking their weighted sum in a preset proportion; and/or the second error value is calculated by computing the corresponding second mean absolute error and second root mean square error from the current period's predicted and actual values, and taking their weighted sum in the preset proportion.
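A minimal sketch of this error-value computation follows, using the equal 0.5/0.5 proportion from the embodiment's feedback algorithm; the function name and sample values are illustrative, not from the patent:

```python
import math

def error_value(predicted, actual, w_mae=0.5, w_rmse=0.5):
    """Weighted sum of mean absolute error and root mean square error
    over one period's resource-usage samples (illustrative helper)."""
    n = len(predicted)
    mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / n
    rmse = math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)
    return w_mae * mae + w_rmse * rmse

# First error: previous period (Δt-1); second error: current period (Δt).
e_prev = error_value([4.0, 6.0, 5.0], [5.0, 5.0, 5.0])
e_curr = error_value([4.8, 5.1, 5.0], [5.0, 5.0, 5.0])
```

Here the second error is smaller than the first, which the feedback algorithm described later in the document would classify as improved prediction performance.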
In some embodiments of the first aspect of the present application, the integrated prediction result is expressed as follows:

P_i(Δt+1) = Σ_{j=1}^{n} w_{ij} · p_j(Δt+1)

where P_i(Δt+1) represents the integrated prediction result for task i in the next time period; Δt represents the current time period; w_{ij} represents the weight of the jth prediction model on the ith task; and p_j(Δt+1) represents model j's predicted resource usage at time Δt+1.
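Assuming each base model exposes a scalar per-task prediction for the next period, the weighted combination just described can be sketched as follows (names are illustrative):

```python
def integrated_prediction(weights, model_predictions):
    """Combine per-model resource predictions for one task using the
    current weights; the weights are expected to sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * p for w, p in zip(weights, model_predictions))

# Three base models predicting CPU cores for task i in period Δt+1:
pred = integrated_prediction([0.5, 0.3, 0.2], [8.0, 10.0, 6.0])
```

With these illustrative numbers the ensemble leans toward the highest-weighted model while still blending in the others.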
In some embodiments of the first aspect of the present application, the process of adjusting EDA task resources comprises: for the running tasks, first adjusting those that need fewer resources, so that redundant resources are released and counted into the idle pool; then adjusting those that need additional resources: if the idle resources can cover the required increase, the corresponding task's resources are increased accordingly; otherwise no adjustment is made.
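A sketch of this two-phase adjustment, under the assumption that resources are counted in CPU cores and each task carries its current allocation and its predicted need; all names and numbers are illustrative:

```python
def adjust_resources(tasks, free_cores):
    """Two-phase adjustment: first shrink tasks whose predicted need is
    below their allocation (releasing cores into the idle pool), then
    grow tasks only when the pool can cover the increase."""
    # Phase 1: release surplus resources into the idle pool.
    for t in tasks:
        if t["predicted"] < t["allocated"]:
            free_cores += t["allocated"] - t["predicted"]
            t["allocated"] = t["predicted"]
    # Phase 2: grant increases only when idle resources suffice.
    for t in tasks:
        need = t["predicted"] - t["allocated"]
        if 0 < need <= free_cores:
            free_cores -= need
            t["allocated"] = t["predicted"]
        # Otherwise: leave this task's allocation unchanged this period.
    return tasks, free_cores

tasks = [
    {"name": "sim_a", "allocated": 8, "predicted": 4},   # shrinks, frees 4
    {"name": "sim_b", "allocated": 4, "predicted": 7},   # grows by 3
    {"name": "sim_c", "allocated": 2, "predicted": 16},  # pool too small: unchanged
]
tasks, free_cores = adjust_resources(tasks, free_cores=2)
```

Releasing before granting is what lets sim_b grow here: its increase is covered by cores that sim_a gave back within the same period.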
In some embodiments of the first aspect of the present application, the simulation prediction base model includes any one or a combination of: an extra-trees regression model, a K-nearest-neighbor model, a linear regression model, a random forest regression model, an extreme gradient boosting model, a decision tree model, a multi-layer perceptron model, a support vector machine regression model, a bridge regression model, and a restricted Boltzmann machine model.
To achieve the above and other related objects, a second aspect of the present application provides an EDA simulation task dynamic resource scheduling apparatus, including: the model and weight setting module is used for setting a plurality of simulation prediction basic models for predicting computing resources required by the EDA simulation task, and setting a corresponding weight adjustment module for each simulation prediction basic model; the weight adjustment module evaluates according to feedback of the prediction result of the corresponding model in each period, and updates the weight of the corresponding model according to the evaluation result; the integrated prediction module is used for calculating integrated prediction results of all the simulation prediction basic models according to the output data of each simulation prediction basic model and the corresponding latest weight; wherein the sum of the weight values of the weight adjustment modules is 1; and the resource adjustment module is used for adjusting EDA task resources on each computing node according to the integrated prediction result.
To achieve the above and other related objects, a third aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the EDA simulation task dynamic resource scheduling method.
To achieve the above and other related objects, a fourth aspect of the present application provides an electronic terminal, comprising: a processor and a memory; the memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory, so that the terminal executes the EDA simulation task dynamic resource scheduling method.
As described above, the EDA simulation task dynamic resource scheduling method, device, terminal and medium of the application have the following beneficial effects: the application achieves more reasonable resource allocation for EDA simulation tasks through resource-fluctuation prediction and corresponding dynamic resource adjustment, thereby improving system resource utilization. First, by monitoring a task's resource usage and feeding EDA-specific parameters such as the current resource utilization, the initial resource-allocation value, and the current/voltage source frequency into the prediction model, a predicted resource demand is obtained; real-time resource adjustment is then achieved by changing the parameters of the system's resource control groups according to that demand. The application thus optimizes dynamic resource scheduling, keeping computing-resource utilization above 85% with a smaller fluctuation range.
Drawings
FIG. 1 is a schematic diagram of load fluctuations during EDA simulation task operation in accordance with an embodiment of the present application.
Fig. 2 is a flow chart of a dynamic resource scheduling method for EDA simulation tasks according to an embodiment of the application.
Fig. 3 is a schematic structural diagram of a weight adjustment module for setting a corresponding weight for each of the simulation prediction base models according to an embodiment of the application.
FIG. 4 is a flow chart illustrating the adjustment of EDA task resources according to an embodiment of the present application.
FIG. 5 is a graph showing the comparison of the effects of static allocation and dynamic allocation in an embodiment of the present application.
FIG. 6 is a schematic diagram of computing resource utilization before modification in accordance with an embodiment of the present application.
FIG. 7 is a schematic diagram of improved computing resource utilization in accordance with an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an EDA simulation task dynamic resource scheduling device according to an embodiment of the application.
Fig. 9 is a schematic structural diagram of an electronic terminal according to an embodiment of the application.
Detailed Description
Other advantages and effects of the present application will become readily apparent to those skilled in the art from the following disclosure, which describes embodiments of the present application with reference to specific examples. The application may also be practiced or applied through other, different embodiments, and the details of this description may be modified or varied from different viewpoints and applications without departing from the spirit and scope of the present application. It should be noted that, in the absence of conflict, the following embodiments and the features in the embodiments may be combined with one another.
In the following description, reference is made to the accompanying drawings, which illustrate several embodiments of the application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including" specify the presence of stated features, operations, elements, components, items, categories and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, categories and/or groups. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions or operations is in some way inherently mutually exclusive.
In order to solve the problems in the background art, the application provides a dynamic resource scheduling method, a device, a terminal and a medium for EDA simulation tasks, which aim to train a deep learning model to predict the dynamic fluctuation of the load of the simulation tasks and adjust the resource allocation according to the task demands in real time during the task operation, thereby ensuring the utilization rate and the parallel efficiency of the computing resources of the system. In order to make the objects, technical solutions and advantages of the present application more apparent, further detailed description of the technical solutions in the embodiments of the present application will be given by the following examples with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The embodiment of the invention provides an EDA simulation task dynamic resource scheduling method, a device of the EDA simulation task dynamic resource scheduling method and a storage medium for storing an executable program for realizing the EDA simulation task dynamic resource scheduling method. With respect to implementation of the EDA simulation task dynamic resource scheduling method, an exemplary implementation scenario of EDA simulation task dynamic resource scheduling will be described in the embodiments of the present invention.
As shown in fig. 1, a schematic diagram of the dynamically changing load of a typical EDA circuit simulation task in the system: the horizontal axis represents time and the vertical axis represents the number of CPU cores in use, i.e. the CPU load. As the figure shows, in the interval from about t = 9 min to t = 15 min the CPU load clearly fluctuates downward, and after t = 22 min it clearly declines again. Static scheduling strategies tend to ignore these dynamic characteristics, which makes it difficult for complex simulation systems to improve execution efficiency.
Therefore, the embodiment of the invention predicts the dynamic change of the resource demand of the simulation task based on the calculation resource information prediction model using the integrated learning, and dynamically schedules the calculation resource in real time based on the prediction result.
Fig. 2 is a schematic flow chart of an EDA simulation task dynamic resource scheduling method in an embodiment of the present invention. The EDA simulation task dynamic resource scheduling method in the embodiment mainly comprises the following steps:
step S21: setting a plurality of simulation prediction basic models for predicting computing resources required by EDA simulation tasks, and setting a corresponding weight adjustment module for each simulation prediction basic model; and the weight adjustment module evaluates according to feedback of the prediction result of the corresponding model in each period, and updates the weight of the corresponding model according to the evaluation result.
It should be noted that most conventional prediction methods in industry are based on a single machine-learning model (Machine Learning Techniques); such methods can predict task behavior from different dimensions, and may also predict performance, execution time and the like alongside future resource behavior. However, different trained models have their own strengths and weaknesses in different directions, and as the system scale grows, a single machine-learning model often struggles to keep performing well. The simulation prediction in the embodiment of the invention therefore adopts an ensemble-learning method: after the base models are built, the final integrated prediction is produced through the ensemble framework shown in fig. 3. Each model is given a weight-adjustment module, the weight of each model is updated according to feedback on that single model's prediction result in each time period, and the integrated prediction result is obtained by multiplying each model's prediction result by its weight and summing.
The simulation prediction base model used in the embodiments of the present invention will be further explained below with reference to table 1, and shown in table 1 are a base model (Method used) and a corresponding parameter setting (Parameters) that are preferably used in the embodiments.
In some examples, the simulation prediction base model may use an extra-trees regression model (Extra Tree Regression); model parameters may set, for example, the number of trees and the maximum tree depth, such as "Number of trees = 100, Maximum tree depth = None".
In some examples, the simulation prediction base model may use a K-nearest-neighbor model (K-Nearest Neighbor); model parameters may set, for example, the number of neighbors and the leaf size, such as "Number of neighbors = 7, Leaf size = 30".
In some examples, the simulation prediction base model may use a linear regression model (Linear Regression); model parameters may set a regularization-strength parameter, such as "Regularization strength = 0.1".
In some examples, the simulation prediction base model may use a random forest regression model (Random Forest); model parameters may set, for example, the number of trees and the maximum tree depth, such as "Number of trees = 100, Maximum tree depth = None".
In some examples, the simulation prediction base model may use an extreme gradient boosting model (Extreme Gradient Boosting, XGBoost); model parameters may set, for example, the learning rate, the maximum tree depth and the number of trees, such as "Learning rate = 0.1, Maximum tree depth = 4, Number of trees = 200".
In some examples, the simulation prediction base model may use a decision tree model (Decision Tree); model parameters may set the maximum tree depth, such as "Maximum tree depth = 5".
In some examples, the simulation prediction base model may use a multi-layer perceptron model (Multi-Layer Perceptron); model parameters may set, for example, the hidden-layer sizes, the learning rate and the maximum number of iterations, such as "Hidden layer sizes = (100, 50), Learning rate = 0.3, Maximum number of iterations = 1000".
In some examples, the simulation prediction base model may use a support vector machine regression model (Support Vector Regression); model parameters may set, for example, the kernel type, the kernel degree and the epsilon value, such as "Kernel type = 'polynomial', Degree of the kernel = 2, Epsilon in the model = 0.01".
In some examples, the simulation prediction base model may use a bridge regression model (Bridge Regression); model parameters may set, for example, the maximum number of iterations and the regularization strength, such as "Maximum number of iterations = 1000, Regularization strength = 0.5".
In some examples, the simulation prediction base model may use a restricted Boltzmann machine model (Restricted Boltzmann Machines); model parameters may set, for example, the learning rate and the maximum number of iterations, such as "Learning rate = 0.1, Maximum number of iterations = 200".
It is noted that the above examples are provided for illustrative purposes and should not be construed as limiting. Also, the method may additionally or alternatively include other base models without departing from the scope of the application.
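Collected in one place, the Table 1 parameter settings above can be sketched as a plain configuration registry. Constructing the actual estimators from these parameters (via scikit-learn, XGBoost, etc.) is assumed and not shown here; the key names are illustrative mappings of the quoted parameter names:

```python
# Parameter settings as quoted in the embodiment's Table 1; values are as
# stated in the text, keys are illustrative. Real estimator construction
# (scikit-learn, XGBoost, ...) is assumed to happen elsewhere.
BASE_MODEL_PARAMS = {
    "extra_trees":       {"n_estimators": 100, "max_depth": None},
    "knn":               {"n_neighbors": 7, "leaf_size": 30},
    "linear_regression": {"regularization_strength": 0.1},
    "random_forest":     {"n_estimators": 100, "max_depth": None},
    "xgboost":           {"learning_rate": 0.1, "max_depth": 4,
                          "n_estimators": 200},
    "decision_tree":     {"max_depth": 5},
    "mlp":               {"hidden_layer_sizes": (100, 50),
                          "learning_rate": 0.3, "max_iter": 1000},
    "svr":               {"kernel": "poly", "degree": 2, "epsilon": 0.01},
    "bridge_regression": {"max_iter": 1000,
                          "regularization_strength": 0.5},
    "rbm":               {"learning_rate": 0.1, "max_iter": 200},
}

# With n base models, each model's weight is initialised to 1/n, matching
# the weight-initialisation step described in the detailed embodiment.
INITIAL_WEIGHT = 1.0 / len(BASE_MODEL_PARAMS)
```

Keeping the parameters in one registry makes it easy to add or drop base models without touching the ensemble logic, which the document notes may include other models as well.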
Step S22: calculating the integrated prediction results of all the simulation prediction basic models according to the output data of each simulation prediction basic model and the corresponding latest weight; wherein the sum of the weight values of the weight adjustment modules is 1.
In the embodiment of the present application, the structure in which each simulation prediction base model is given a corresponding weight-adjustment module is shown in fig. 3: there are n base models in total, namely prediction model 1, prediction model 2, ..., prediction model n-1, prediction model n. Each prediction model is given a weight-adjustment module (not shown) that evaluates the feedback on that single model's prediction result in each time period and updates the model's weight accordingly; multiplying each model's prediction result by its weight and summing yields the integrated prediction result.
More preferably, the process of updating the model weight by the weight adjustment module is as follows:
step A: and initializing the average value of the weight of each model.
Specifically, every model's weight is initialized to the same value: with n base models, each model's weight is initialized to 1/n.
And (B) step (B): and predicting the resource use condition of the preset task in the current period by using the simulation prediction basic model, and evaluating the feedback of each simulation prediction basic model to the environment based on an evaluation feedback algorithm.
Specifically, the jth prediction model predicts the resource usage of the ith task in the next time period from the resource-usage record within the period Δt.
In an embodiment of the present invention, the evaluation feedback algorithm includes: acquiring a first error value between a resource use predicted value and a resource use actual value in the previous period and a second error value between the resource use predicted value and the resource use actual value in the current period; and judging a performance evaluation feedback result of the simulation prediction basic model according to the comparison result of the first error value and the second error value, and adjusting model weight according to the performance evaluation feedback result so as to improve the model weight with good performance evaluation feedback.
Specifically, the error between the resource-use predicted value and the resource-use actual value in the previous period (Δt-1) is defined as the first error E_{Δt-1}, and the error between the predicted value and the actual value in the current period Δt is defined as the second error E_{Δt}; the feedback result is denoted Feedback. The evaluation feedback algorithm is implemented as shown below.
The specific feedback evaluation algorithm is as follows:
1: E_{Δt} ← 0.5 × MAE + 0.5 × RMSE
2: if E_{Δt} < E_{Δt-1} then
3:     Feedback ← "good"
4: else if E_{Δt} > E_{Δt-1} then
5:     Feedback ← "bad"
6: else
7:     Feedback ← "none"
8: return Feedback
If the first error is larger than the second error, the model's error has decreased and its prediction performance has improved; if the first error is smaller than the second error, the model's error has grown over the time period and its prediction performance has worsened; if the two errors are equal, the error and hence the prediction performance are unchanged. According to the model's prediction performance, the feedback result is classified as good, none or bad: the weight of a model whose feedback is good is increased accordingly, the weight of a model whose feedback is none is left unchanged, and the weight of a model whose feedback is bad is reduced accordingly.
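The feedback-driven weight update can be sketched as below. The patent fixes only the direction of each adjustment and that the weights sum to 1; the multiplicative step size and renormalisation are assumptions of this sketch:

```python
def update_weights(weights, feedbacks, step=0.1):
    """Raise weights of models with 'good' feedback, lower those with
    'bad', keep 'none' unchanged, then renormalise so the weights again
    sum to 1. The step size is illustrative, not from the patent."""
    factor = {"good": 1.0 + step, "none": 1.0, "bad": 1.0 - step}
    raw = [w * factor[f] for w, f in zip(weights, feedbacks)]
    total = sum(raw)
    return [w / total for w in raw]

# Four base models starting from equal weights (1/n each):
new_w = update_weights([0.25, 0.25, 0.25, 0.25],
                       ["good", "bad", "none", "none"])
```

The renormalisation step keeps the invariant from the first aspect of the method: the weight values across the weight-adjustment modules always sum to 1.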
Further, in order to measure the prediction performance of the integrated model, the mean absolute error (MAE) and the root mean square error (RMSE) are used as metrics. Taking the second error E_Δt as an example:
where y_i(Δt) denotes the true value and y′_i(Δt) the predicted value; the parameter p takes a value smaller than 1 (0.9-0.95) when the predicted value is larger than the true value, and a value larger than 1 when the predicted value is smaller than the true value. The reason is that, during task simulation, the system is scheduled and optimized on the premise of ensuring as far as possible that each task obtains sufficient computing resources to meet its completion deadline; over-prediction can therefore be tolerated more readily, while under-prediction is comparatively undesirable.
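A hedged sketch of this error measure, assuming the dropped formulas follow the standard MAE/RMSE definitions with each residual scaled by the asymmetric factor p; the concrete p values below are illustrative picks, not values fixed by the source (which only states p in the 0.9-0.95 range for over-prediction):

```python
import math

def weighted_error(y_true, y_pred, p_over=0.92, p_under=1.08):
    """Blend of asymmetrically weighted MAE and RMSE: E = 0.5*MAE + 0.5*RMSE.

    Each residual is scaled by p < 1 when the model over-predicts (tolerated)
    and by p > 1 when it under-predicts (penalized), matching the stated
    preference for over-provisioning simulation resources.
    """
    n = len(y_true)
    abs_sum = sq_sum = 0.0
    for yt, yp in zip(y_true, y_pred):
        p = p_over if yp > yt else p_under if yp < yt else 1.0
        err = p * abs(yt - yp)
        abs_sum += err
        sq_sum += err * err
    mae = abs_sum / n
    rmse = math.sqrt(sq_sum / n)
    return 0.5 * mae + 0.5 * rmse
```

Under this scaling, missing the true value by the same amount costs more when the model under-predicts than when it over-predicts.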
It should be noted that the performance of a simulation task is affected by many different resources, so the usage trajectories of the different resources should be monitored, collected and recorded; a model that makes its subsequent predictions from a single data series cannot perform the prediction task well. Therefore, according to the working characteristics of simulation tasks and the optimization requirements, the following feature parameters are adopted as the prediction basis for the subsequent simulation resource prediction unit. The preset features include, but are not limited to, the CPU pre-assigned value, the maximum current/port/voltage source frequency, the simulation accuracy, the transient analysis start/stop times, and the transient analysis maximum/minimum time steps; the runtime features include, but are not limited to, CPU utilization, memory utilization, the limited cycle ratio, and the like.
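Purely for illustration, the two feature groups might be gathered as follows; every field name and sample value here is an assumption, chosen only to show the preset/runtime split feeding one feature vector:

```python
# Hypothetical preset features, fixed before the simulation starts.
preset_features = {
    "cpu_preassigned": 8,          # CPU pre-assigned value (cores)
    "max_source_freq_hz": 2.4e9,   # max current/port/voltage source frequency
    "sim_accuracy": 1e-6,          # simulation accuracy target
    "tran_start_s": 0.0,           # transient analysis start time
    "tran_stop_s": 1e-3,           # transient analysis stop time
    "tran_max_step_s": 1e-9,       # transient analysis maximum time step
    "tran_min_step_s": 1e-12,      # transient analysis minimum time step
}

# Hypothetical runtime features, sampled while the task runs.
runtime_features = {
    "cpu_utilization": 0.75,
    "memory_utilization": 0.6,
    "limited_cycle_ratio": 0.1,
}

def feature_vector(preset, runtime):
    """Concatenate both groups into one ordered list for a predictor."""
    return list(preset.values()) + list(runtime.values())
```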
Step C: updating the weight of each simulation prediction model according to the evaluation result, and accordingly determining the integrated prediction result of the resource usage of the preset task in the next period.
Thereafter, the difference between the actual resource usage y_i(Δt) and the predicted resource usage y_i,j(Δt) is compared in order to evaluate the feedback of each model to the environment, the model weights are updated according to the evaluation result, and the integrated prediction result En_i(Δt+1) of the resource usage of the i-th task in the next time period is determined.
The integrated prediction result can be expressed by the following formula (4):

En_i(Δt+1) = Σ_j W_i,j · y_i,j(Δt+1)    (4)

where En_i(Δt+1) represents the integrated prediction result for task i in the next time period; Δt represents the current time period; W_i,j represents the weight of the j-th prediction model on the i-th task; and y_i,j(Δt+1) represents the resource usage predicted by model j at time (Δt+1).
For the convenience of understanding by those skilled in the art, the specific implementation procedure of the prediction algorithm is as follows:
Specifically, P_j(H_i(Δt)) denotes the prediction made by model j from the historical resource data of task i; F_i,j denotes the feedback of model j on the prediction result for task i; y_i(Δt) denotes the actual resource usage of task i at Δt; y_i,j(Δt) denotes the resource prediction of model j for task i at Δt; W_i,j denotes the weight of the j-th prediction model on the i-th task; J_i denotes the i-th task; and En_i(Δt+1) denotes the integrated prediction result for the next time period of task i.
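With these symbols, formula (4) reduces to a weighted sum over the base models, which can be sketched directly:

```python
def integrated_prediction(weights, model_predictions):
    """Weighted sum over base models: En_i(Δt+1) = Σ_j W_i,j · y_i,j(Δt+1).

    weights           -- W_i,j for one task i, assumed to sum to 1
    model_predictions -- y_i,j(Δt+1), one prediction per base model j
    """
    return sum(w * y for w, y in zip(weights, model_predictions))
```

With weights (0.5, 0.3, 0.2) and per-model predictions (10, 20, 30), the integrated prediction is 17.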
Step S23: adjusting the EDA task resources on each computing node according to the integrated prediction result.
In the embodiment of the invention, the process of adjusting EDA task resources includes: for the running tasks, first performing resource adjustment on those tasks that need to reduce resources, so as to release the redundant resources and count them into the idle resources; then performing resource adjustment for the running tasks that need additional resources: if the idle resources can cover the required addition, the corresponding resources are added to the task, otherwise no adjustment is made.
Specifically, the specific process of the task resource adjustment algorithm in operation is as follows:
Specifically, each computing node dynamically schedules EDA task resources as follows: first, load fluctuation prediction is performed on the running EDA tasks using the method of steps S21 and S22, and the task resources are adjusted according to the prediction result. In the adjustment algorithm, J_i is the running task numbered i (there are k running tasks in total), R_i is the resources occupied by the task, A_i is the number of resources the task needs adjusted (A_i is greater than 0 when adding resources and less than 0 when reducing resources), and R_l is the idle resources in the node. The algorithm first adjusts, in task-number order, the tasks that need to reduce resources and counts the released resources back into the idle resource pool; it then adjusts the tasks that need additional resources, granting the increase when the idle resource pool is sufficient and skipping the task in this round when it is not.
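A minimal sketch of the two-pass adjustment just described, with assumed list-based bookkeeping standing in for the node's real resource accounting:

```python
def adjust_running_tasks(occupied, deltas, free):
    """Two-pass adjustment of running tasks on one node.

    occupied -- R_i, resources currently held by each running task
    deltas   -- A_i, requested change per task (negative = release resources)
    free     -- R_l, idle resources on the node
    Returns (new allocations, remaining free resources). Names are illustrative.
    """
    alloc = list(occupied)
    # Pass 1: shrink tasks that release resources; reclaim into the idle pool.
    for i, d in enumerate(deltas):
        if d < 0:
            alloc[i] += d
            free -= d          # d is negative, so the idle pool grows
    # Pass 2: grow tasks only when the idle pool can cover the request.
    for i, d in enumerate(deltas):
        if d > 0 and d <= free:
            alloc[i] += d
            free -= d
    return alloc, free
```

Releasing first matters: resources freed in pass 1 become available to the growing tasks in pass 2 of the same round.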
After the above step of adjusting the resources of the task in operation is completed, the task scheduling in the queuing queue is performed, and the specific process of the queuing task scheduling algorithm is as follows:
Specifically, in the queuing task scheduling algorithm, J_i is the task numbered i in the queue, there are n tasks in the queue in total, R_l is the current number of idle resources, and R_i is the initial number of resources allocated to job i. This algorithm runs after the adjustment algorithm: the tasks in the queue are ordered by submission time, and scheduling proceeds by task number against the current number of idle resources; a task is allocated resources when the idle resources are sufficient, and the next task is considered when they are not, until all tasks in the queue have been traversed or the idle resources reach zero.
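The queuing scheduler just described can be sketched in the same style; the (job_id, request) tuples are an assumed representation of the queue entries:

```python
def schedule_queue(queue, free):
    """FIFO scheduling of queued jobs against the idle resource pool.

    queue -- list of (job_id, initial_resource_request) ordered by submit time
    free  -- current number of idle resources
    Returns the ids of jobs started. A job the pool cannot cover is skipped,
    and scanning continues so a later, smaller job may still start.
    """
    started = []
    for job_id, need in queue:
        if free == 0:
            break                    # idle pool exhausted, stop traversing
        if need <= free:
            free -= need
            started.append(job_id)
    return started
```

This matches the behavior summarized later: when resources are insufficient for a large queued job, smaller jobs behind it can still be launched.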
For ease of understanding by those skilled in the art, the specific process of tuning EDA task resources is described in connection with the specific example in FIG. 4:
step S01: traversing the task Ji in operation;
step S02: judging whether the task Ji needs to reduce resources;
step S03: if the task Ji does not need to reduce resources, judging whether traversing is completed or not;
if the traversal is not completed, returning to the step S01; if the traversal is completed, jumping to step S07;
step S04: if the task Ji needs to reduce the resources, reducing the resources of the task Ji according to the predicted value;
step S05: accounting for reduced resources into free resources;
step S06: judging whether traversing is completed or not;
if the traversal is not completed, returning to the step S01; if the traversal is completed, the step S07 is entered;
step S07: traversing the task Ji in operation again;
step S08: judging whether resources need to be added or not;
if yes, go to step S09; if not, the step S13 is carried out;
step S09: judging whether the idle resources are sufficient or not; if so, the step S10 is carried out, and if not, the step S07 is returned to;
step S10: adding resources of the task Ji according to the model predicted value;
step S11: deducting the added resources from the free resource pool;
Step S12: judging whether traversing is completed or not;
if the traversal is completed, the step S14 is entered; if the traversal is not completed, returning to the step S07;
step S13: judging whether traversing is completed or not;
if the traversal is completed, the step S14 is entered; if the traversal is not completed, returning to the step S07;
step S14: traversing tasks Ji in a queuing queue;
step S15: determining the resources required by the task Ji;
step S16: judging whether the idle resource pool is sufficient or not;
if so, go to step S17; if not, the process proceeds to step S20;
step S17: allocating resources for the task and starting to run;
step S18: deducting the allocated resources from the free resource pool;
step S19: judging whether traversing is completed or not;
if the traversal is completed, the step S21 is entered; if the traversal is not completed, returning to the step S14;
step S20: judging whether traversing is completed or not;
if the traversal is completed, the step S21 is entered; if the traversal is not completed, returning to the step S14;
step S21: ending the flow.
In summary, the dynamic resource scheduling method for EDA simulation tasks provided by the embodiment of the invention performs resource prediction and dynamic adjustment on the tasks running in the system after they start. Compared with fixed static resource allocation, the method reallocates redundant resources during phases of reduced task demand, as shown in fig. 5, solving the resource waste that previously resulted from ignoring load fluctuation during EDA task operation; the task scheduling algorithm then completes the scheduling of the tasks in the queue, and smaller tasks later in the queue are still considered when resources are insufficient. The method greatly improves the utilization of the system's overall computing resources and the execution efficiency of the whole task set.
Fig. 8 is a schematic structural diagram of an EDA simulation task dynamic resource scheduling device according to an embodiment of the present invention. The dynamic resource scheduling device for EDA simulation tasks in the embodiment of the invention comprises a model and weight setting module 1301, an integrated prediction module 1302 and a resource adjustment module 1303.
The model and weight setting module 1301 is configured to set a plurality of simulation prediction base models for predicting computing resources required by the EDA simulation task, and set a corresponding weight adjustment module for each simulation prediction base model; and the weight adjustment module evaluates according to feedback of the prediction result of the corresponding model in each period, and updates the weight of the corresponding model according to the evaluation result.
The integrated prediction module 1302 is configured to calculate integrated prediction results of all the simulation prediction base models according to the output data of each simulation prediction base model and the corresponding latest weight; wherein the sum of the weight values of the weight adjustment modules is 1.
The resource adjustment module 1303 is configured to adjust EDA task resources on each computing node according to the integrated prediction result.
In some examples, the process of updating model weights by the weight adjustment module in model and weight setting module 1301 includes: initializing the average value of the weight of each model; predicting the resource use condition of a preset task in the current period by using a simulation prediction basic model, and evaluating the feedback of each simulation prediction basic model to the environment based on an evaluation feedback algorithm; and updating the weight of each simulation prediction model according to the evaluation result, and accordingly determining the integrated prediction result of the resource use condition of the preset task in the next period.
Further, the evaluation feedback algorithm includes: acquiring a first error value between a resource use predicted value and a resource use actual value in the previous period and a second error value between the resource use predicted value and the resource use actual value in the current period; and judging a performance evaluation feedback result of the simulation prediction basic model according to the comparison result of the first error value and the second error value, and adjusting model weight according to the performance evaluation feedback result so as to improve the model weight with good performance evaluation feedback.
Further, judging a performance evaluation feedback result of the simulation prediction basic model according to a comparison result of the first error value and the second error value, and adjusting model weights according to the performance evaluation feedback result to improve model weights with good performance evaluation feedback, wherein the performance evaluation feedback method comprises the following steps: if the first error is larger than the second error, determining that the prediction performance of the simulation prediction basic model is good, and correspondingly improving the model weight of the simulation prediction basic model; if the first error is equal to the second error, determining that the prediction performance of the simulation prediction basic model is unchanged, and not changing the model weight of the simulation prediction basic model; if the first error is smaller than the second error, determining that the prediction performance of the simulation prediction basic model is poor, and correspondingly reducing the model weight of the simulation prediction basic model.
The calculation mode of the first error value comprises the following steps: calculating a corresponding first mean absolute error and a corresponding first root mean square error based on the predicted value of the resource use and the actual value of the resource use in the last period, and calculating a weighted sum of the first mean absolute error and the first root mean square error according to a preset proportion so as to obtain a first error value; and/or the calculating manner of the second error value comprises: and calculating a corresponding second mean absolute error and second root mean square error based on the resource use predicted value and the resource use actual value of the current period, and calculating a weighted sum of the second mean absolute error and the second root mean square error according to a preset proportion, so as to obtain the second error value.
In some examples, the integrated prediction result is expressed as follows:

En_i(Δt+1) = Σ_j W_i,j · y_i,j(Δt+1)

where En_i(Δt+1) represents the integrated prediction result for task i in the next time period; Δt represents the current time period; W_i,j represents the weight of the j-th prediction model on the i-th task; and y_i,j(Δt+1) represents the resource usage predicted by model j at time (Δt+1).
In some examples, the process of the resource adjustment module 1303 adjusting the EDA task resource includes: for a plurality of running tasks, firstly, carrying out resource adjustment on tasks needing to reduce resources in the running tasks so as to release redundant resources and reckon with idle resources; and performing resource adjustment for the tasks needing to be added with resources in the running tasks, if the idle resources meet the needed added resources, correspondingly adding the resources for the corresponding tasks, otherwise, not performing resource adjustment.
In some examples, the simulated predictive base model includes a combination of any one or more of the following: a limit tree regression model, a K-nearest neighbor classification model, a linear regression model, a random forest regression model, an extreme gradient lifting model, a decision tree model, a multi-layer perceptron model, a support vector machine regression model, a bridge regression model, and a limited Boltzmann machine model.
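A minimal sketch of such an equal-weight ensemble; the stand-in lambda models below are placeholders for the listed regressors, not implementations of them, and the mean-value weight initialization follows the description above:

```python
class Ensemble:
    """Equal-weight ensemble wrapper; base models are represented here only
    by callables taking a resource-usage history (an assumption)."""
    def __init__(self, models):
        self.models = models
        # Initialize each weight to the mean value 1/J, so they sum to 1.
        self.weights = [1.0 / len(models)] * len(models)

    def predict(self, history):
        """Weighted sum of the base-model predictions."""
        return sum(w * m(history) for w, m in zip(self.weights, self.models))

ens = Ensemble([lambda h: sum(h) / len(h),   # stand-in "mean" model
                lambda h: h[-1]])            # stand-in "persistence" model
```

In a real deployment the callables would be trained regressors (extra trees, random forest, etc.) queried on the task's historical feature data.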
It should be noted that, when the EDA simulation task dynamic resource scheduling device provided in the foregoing embodiment performs EDA simulation task dynamic resource scheduling, only the division of each program module is used for illustration, and in practical application, the process allocation may be completed by different program modules according to needs, that is, the internal structure of the device is divided into different program modules, so as to complete all or part of the processes described above. In addition, the device for dynamic resource scheduling of the EDA simulation task provided in the foregoing embodiment and the method embodiment for dynamic resource scheduling of the EDA simulation task belong to the same concept, and detailed implementation processes of the device and the method embodiment are detailed and will not be described herein.
Referring to fig. 9, which shows an alternative hardware structure diagram of an electronic terminal 1400 provided in an embodiment of the present invention, the terminal 1400 may be a mobile phone, a computer device, a tablet device, a personal digital processing device, a factory background processing device, etc. The electronic terminal 1400 includes: at least one processor 1401, a memory 1402, at least one network interface 1404, and a user interface 1406. The various components in the device are coupled together by a bus system 1405. It will be appreciated that the bus system 1405 is used to enable communications among these components; in addition to a data bus, it includes a power bus, a control bus, and a status signal bus, but for clarity of illustration the various buses are labeled as the bus system in fig. 9.
The user interface 1406 may include, among other things, a display, keyboard, mouse, trackball, click gun, keys, buttons, touch pad, or touch screen, etc.
It is to be appreciated that the memory 1402 can be either volatile or nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be, among others, a read-only memory (ROM) or a programmable read-only memory (PROM, Programmable Read-Only Memory); the volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM, Static Random Access Memory) and synchronous static random access memory (SSRAM, Synchronous Static Random Access Memory). The memory described by embodiments of the present invention is intended to comprise, without being limited to, these and any other suitable types of memory.
Memory 1402 in embodiments of the present invention is used to store various types of data to support the operation of electronic terminal 1400. Examples of such data include: any executable programs for operating on electronic terminal 1400, such as operating system 14021 and application programs 14022; the operating system 14021 contains various system programs, such as framework layer, core library layer, driver layer, etc., for implementing various basic services and handling hardware-based tasks. The application 14022 may include various applications such as a media player (MediaPlayer), a Browser (Browser), and the like for implementing various application services. The dynamic resource scheduling method for EDA simulation tasks provided by the embodiment of the invention can be contained in the application 14022.
The method disclosed in the above embodiment of the present invention may be applied to the processor 1401 or implemented by the processor 1401. The processor 1401 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated hardware logic circuits in the processor 1401 or by instructions in the form of software. The processor 1401 described above may be a general purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 1401 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. The general purpose processor may be a microprocessor or any conventional processor. The steps of the method provided by the embodiment of the invention may be directly embodied as being executed to completion by a hardware decoding processor, or by a combination of the hardware and software modules in the decoding processor. The software modules may be located in a storage medium with a memory; the processor reads information from the memory and performs the steps of the method in combination with its hardware.
In an exemplary embodiment, the electronic terminal 1400 may be implemented by one or more application specific integrated circuits (ASIC, Application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, Programmable Logic Device), or complex programmable logic devices (CPLD, Complex Programmable Logic Device) for performing the aforementioned methods.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the method embodiments described above may be performed by computer program related hardware. The aforementioned computer program may be stored in a computer readable storage medium. The program, when executed, performs steps including the method embodiments described above; and the aforementioned storage medium includes: various media that can store program code, such as ROM, RAM, magnetic or optical disks.
In the embodiments provided herein, the computer-readable storage medium may include read-only memory, random-access memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, U-disk, removable hard disk, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable and data storage media do not include connections, carrier waves, signals, or other transitory media, but are intended to be directed to non-transitory, tangible storage media. Disk and disc, as used herein, includes Compact Disc (CD), laser disc, optical disc, digital Versatile Disc (DVD), floppy disk and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
In summary, the application provides a dynamic resource scheduling method, device, terminal and medium for EDA simulation tasks. It addresses the technical problem that, after simulation jobs of the same scale and type are submitted, the computing resource utilization of the original system remains low over long periods (only about 50%, with large fluctuations). The application therefore effectively overcomes various defects in the prior art and has high industrial utilization value.
The above embodiments merely illustrate the principles of the present application and its effectiveness, and are not intended to limit the application. Those skilled in the art may modify or vary the above embodiments without departing from the spirit and scope of the application. Accordingly, all equivalent modifications and variations that can be accomplished by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present application.

Claims (8)

1. The EDA simulation task dynamic resource scheduling method is characterized by comprising the following steps of:
setting a plurality of simulation prediction basic models for predicting computing resources required by EDA simulation tasks, and setting a corresponding weight adjustment module for each simulation prediction basic model; the weight adjustment module evaluates according to feedback of the prediction result of the corresponding model in each period, and updates the weight of the corresponding model according to the evaluation result;
calculating the integrated prediction results of all the simulation prediction basic models according to the output data of each simulation prediction basic model and the corresponding latest weight; wherein the sum of the weight values of the weight adjustment modules is 1;
adjusting EDA task resources on each computing node according to the integrated prediction result;
the process of updating the model weight by the weight adjustment module comprises the following steps: initializing the average value of the weight of each model; predicting the resource use condition of a preset task in the current period by using a simulation prediction basic model, and evaluating the feedback of each simulation prediction basic model to the environment based on an evaluation feedback algorithm; updating the weight of each simulation prediction model according to the evaluation result, and accordingly determining an integrated prediction result of the resource use condition of the preset task in the next period;
The evaluation feedback algorithm includes: acquiring a first error value between a resource use predicted value and a resource use actual value in the previous period and a second error value between the resource use predicted value and the resource use actual value in the current period; judging a performance evaluation feedback result of the simulation prediction basic model according to a comparison result of the first error value and the second error value, and adjusting model weights according to the performance evaluation feedback result to improve the model weights with good performance evaluation feedback;
judging a performance evaluation feedback result of the simulation prediction basic model according to a comparison result of the first error value and the second error value, and adjusting model weights according to the performance evaluation feedback result to improve the model weights with good performance evaluation feedback, wherein the performance evaluation feedback method comprises the following steps of: if the first error is larger than the second error, determining that the prediction performance of the simulation prediction basic model is good, and correspondingly improving the model weight of the simulation prediction basic model; if the first error is equal to the second error, determining that the prediction performance of the simulation prediction basic model is unchanged, and not changing the model weight of the simulation prediction basic model; if the first error is smaller than the second error, determining that the prediction performance of the simulation prediction basic model is poor, and correspondingly reducing the model weight of the simulation prediction basic model.
2. The EDA simulation task dynamic resource scheduling method of claim 1, comprising:
the calculation mode of the first error value comprises the following steps: calculating a corresponding first mean absolute error and a corresponding first root mean square error based on the predicted value of the resource use and the actual value of the resource use in the last period, and calculating a weighted sum of the first mean absolute error and the first root mean square error according to a preset proportion so as to obtain a first error value; and/or
The calculating mode of the second error value comprises the following steps: and calculating a corresponding second mean absolute error and second root mean square error based on the resource use predicted value and the resource use actual value of the current period, and calculating a weighted sum of the second mean absolute error and the second root mean square error according to a preset proportion, so as to obtain the second error value.
3. The EDA simulation task dynamic resource scheduling method of claim 1, wherein the integrated prediction result is expressed as follows:

En_i(Δt+1) = Σ_j W_i,j · y_i,j(Δt+1)

wherein En_i(Δt+1) represents the integrated prediction result of the next time period of task i; Δt represents the current time period; W_i,j represents the weight of the j-th prediction model on the i-th task; and y_i,j(Δt+1) represents the predicted resource usage of model j for task i at time (Δt+1).
4. The method for dynamic resource scheduling of EDA simulation tasks according to claim 1, wherein the process of adjusting the EDA task resources comprises: for a plurality of running tasks, firstly, carrying out resource adjustment on tasks needing to reduce resources in the running tasks so as to release redundant resources and reckon with idle resources; and performing resource adjustment for the tasks needing to be added with resources in the running tasks, if the idle resources meet the needed added resources, correspondingly adding the resources for the corresponding tasks, otherwise, not performing resource adjustment.
5. The EDA simulation task dynamic resource scheduling method of claim 1, wherein the simulation prediction base model comprises any one or a combination of the following: a limit tree regression model, a K-nearest neighbor classification model, a linear regression model, a random forest regression model, an extreme gradient lifting model, a decision tree model, a multi-layer perceptron model, a support vector machine regression model, a bridge regression model, and a limited Boltzmann machine model.
6. An EDA simulation task dynamic resource scheduling device, comprising:
the model and weight setting module is used for setting a plurality of simulation prediction basic models for predicting computing resources required by the EDA simulation task, and setting a corresponding weight adjustment module for each simulation prediction basic model; the weight adjustment module evaluates according to feedback of the prediction result of the corresponding model in each period, and updates the weight of the corresponding model according to the evaluation result;
The integrated prediction module is used for calculating integrated prediction results of all the simulation prediction basic models according to the output data of each simulation prediction basic model and the corresponding latest weight; wherein the sum of the weight values of the weight adjustment modules is 1;
the resource adjustment module is used for adjusting EDA task resources on each computing node according to the integrated prediction result;
the process of updating the model weights by the weight adjustment module comprises the following steps: initializing the weight of each model to an equal average value; predicting the resource usage of a preset task in the current period with each simulation prediction basic model, and evaluating the feedback of each simulation prediction basic model against the environment based on an evaluation feedback algorithm; updating the weight of each simulation prediction model according to the evaluation result, and determining accordingly the integrated prediction result for the resource usage of the preset task in the next period; the evaluation feedback algorithm comprises: acquiring a first error value between the predicted and actual resource usage in the previous period, and a second error value between the predicted and actual resource usage in the current period; judging the performance evaluation feedback result of the simulation prediction basic model according to the comparison of the first error value and the second error value, and adjusting the model weight accordingly so as to increase the weights of models with good performance evaluation feedback, which comprises the following steps: if the first error is larger than the second error, judging that the prediction performance of the simulation prediction basic model has improved, and increasing its model weight accordingly; if the first error is equal to the second error, judging that the prediction performance of the simulation prediction basic model is unchanged, and leaving its model weight unchanged; if the first error is smaller than the second error, judging that the prediction performance of the simulation prediction basic model has degraded, and reducing its model weight accordingly.
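The comparison-based weight update described in the claim above can be sketched as follows. The claim specifies only the direction of each adjustment, so the multiplicative step size `eta`, the renormalization of the weights, and the function names here are illustrative assumptions, not part of the patented method:

```python
def update_weights(weights, first_errors, second_errors, eta=0.1):
    """Adjust each base model's weight by comparing its error in the
    previous period (first_errors) with its error in the current
    period (second_errors): shrinking error raises the weight,
    growing error lowers it, equal error leaves it unchanged."""
    new_w = []
    for w, e1, e2 in zip(weights, first_errors, second_errors):
        if e1 > e2:        # error decreased: prediction performance improved
            new_w.append(w * (1 + eta))
        elif e1 < e2:      # error increased: prediction performance degraded
            new_w.append(w * (1 - eta))
        else:              # performance unchanged: weight unchanged
            new_w.append(w)
    total = sum(new_w)
    return [w / total for w in new_w]  # renormalize so weights sum to 1

def integrated_prediction(weights, predictions):
    """Weighted combination of per-model resource-usage predictions."""
    return sum(w * p for w, p in zip(weights, predictions))
```

With three equally weighted models where the first model's error fell, the second's held steady, and the third's rose, the updated weights rank the first model highest and the third lowest, and the integrated prediction for the next period is the weighted sum of the individual model predictions.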
7. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, implements the EDA simulation task dynamic resource scheduling method of any of claims 1 to 5.
8. An electronic terminal, comprising: a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory, so that the terminal executes the EDA simulation task dynamic resource scheduling method according to any one of claims 1 to 5.
CN202311204519.7A 2023-09-19 2023-09-19 Dynamic resource scheduling method, device, terminal and medium for EDA simulation task Active CN116932174B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311204519.7A CN116932174B (en) 2023-09-19 2023-09-19 Dynamic resource scheduling method, device, terminal and medium for EDA simulation task


Publications (2)

Publication Number Publication Date
CN116932174A CN116932174A (en) 2023-10-24
CN116932174B true CN116932174B (en) 2023-12-08

Family

ID=88379325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311204519.7A Active CN116932174B (en) 2023-09-19 2023-09-19 Dynamic resource scheduling method, device, terminal and medium for EDA simulation task

Country Status (1)

Country Link
CN (1) CN116932174B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117155539B (en) * 2023-10-31 2024-01-30 浙江大学 Obfuscation and restoration method, device, terminal and medium for analog radio-frequency circuit netlists

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111258767A (en) * 2020-01-22 2020-06-09 中国人民解放军国防科技大学 Intelligent cloud computing resource allocation method and device for complex system simulation application
WO2020134133A1 (en) * 2018-12-27 2020-07-02 国网江苏省电力有限公司南京供电分公司 Resource allocation method, substation, and computer-readable storage medium
CN113378498A (en) * 2021-08-12 2021-09-10 新华三半导体技术有限公司 Task allocation method and device
CN114297935A (en) * 2021-12-30 2022-04-08 中国民用航空总局第二研究所 Airport terminal building departure optimization operation simulation system and method based on digital twin
CN114327861A (en) * 2021-11-17 2022-04-12 芯华章科技股份有限公司 Method, apparatus, system and storage medium for executing EDA task
CN115952054A (en) * 2022-12-22 2023-04-11 广州文远知行科技有限公司 Simulation task resource management method, device, equipment and medium
CN116137593A (en) * 2023-02-20 2023-05-19 重庆邮电大学 Virtual network function migration method for digital twin auxiliary dynamic resource demand prediction

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220066824A1 (en) * 2020-08-31 2022-03-03 Synopsys, Inc. Adaptive scheduling with dynamic partition-load balancing for fast partition compilation


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Loihi: A Neuromorphic Manycore Processor with On-Chip Learning; Mike Davies et al.; IEEE Computer Society; vol. 38, no. 1; full text *
A Host Load Prediction Method in Cloud Environments; Jiang Wei, Chen Yuzhong, Huang Qicheng, Liu Zhanghui, Liu Genggeng; Computer Science, 2018, (S1); full text *


Similar Documents

Publication Publication Date Title
Jyoti et al. Dynamic provisioning of resources based on load balancing and service broker policy in cloud computing
Kim et al. Machine learning for design space exploration and optimization of manycore systems
JP7430744B2 (en) Improving machine learning models to improve locality
CN116932174B (en) Dynamic resource scheduling method, device, terminal and medium for EDA simulation task
US20140343711A1 (en) Decision support system for order prioritization
CN110689121A (en) Method for realizing neural network model splitting by using multi-core processor and related product
CN109996247B (en) Networked resource allocation method, device, equipment and storage medium
JP7246447B2 (en) Model training method, apparatus, electronic device, storage medium, development system and program
CN114936085A (en) ETL scheduling method and device based on deep learning algorithm
US20210304066A1 (en) Partitioning for an execution pipeline
CN113110914A (en) Internet of things platform construction method based on micro-service architecture
Deng et al. Reliability-aware task scheduling for energy efficiency on heterogeneous multiprocessor systems
KR20210148586A (en) Scheduler, method for operating the same and accelerator system including the same
JP2021022373A (en) Method, apparatus and device for balancing loads, computer-readable storage medium, and computer program
Pandey et al. Energy efficiency strategy for big data in cloud environment using deep reinforcement learning
Wu et al. Intelligent fitting global real‐time task scheduling strategy for high‐performance multi‐core systems
EP4280107A1 (en) Data processing method and apparatus, device, and medium
JP2020079991A (en) Optimization apparatus, control method of optimization apparatus, and control program of optimization apparatus
CN116560968A (en) Simulation calculation time prediction method, system and equipment based on machine learning
EP4258169A1 (en) Model training method, apparatus, storage medium, and device
CN115883550A (en) Task processing method, device, electronic equipment, storage medium and program product
He et al. HOME: A holistic GPU memory management framework for deep learning
Li et al. An application-oblivious memory scheduling system for DNN accelerators
KR20230068709A (en) Scheduler, method for operating the same and electronic device including the same
Tang et al. Edge computing energy-efficient resource scheduling based on deep reinforcement learning and imitation learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant