CN118277273A - MOM system global resource collaborative scheduling and tracking mapping method

MOM system global resource collaborative scheduling and tracking mapping method

Info

Publication number
CN118277273A
Authority
CN
China
Prior art keywords
resource
task
scheduling
resources
tasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410520593.8A
Other languages
Chinese (zh)
Inventor
王维龙
洪佳吟
江雄锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Runtop Iot Technology Co ltd
Original Assignee
Xiamen Runtop Iot Technology Co ltd
Filing date
Publication date
Application filed by Xiamen Runtop Iot Technology Co ltd filed Critical Xiamen Runtop Iot Technology Co ltd
Publication of CN118277273A


Abstract

The invention provides a global resource collaborative scheduling and tracking mapping method for an MOM system, which ensures efficient interaction and data synchronization between models by constructing a multi-model simulation tool and applying a fuzzy clustering algorithm. The system uses dynamic allocation of cloud resources to realize on-demand allocation and recovery of computing resources and to ensure service continuity. Simulation tools are registered and verified, and intelligent task scheduling is built in combination with the fuzzy clustering algorithm to optimize resource management. Containerized deployment and multi-mode release strategies accelerate compilation and deployment, achieve cross-level intelligent coordination of resources and tasks and their efficient collaborative management, and greatly enhance simulation and scheduling capability in complex environments.

Description

MOM system global resource collaborative scheduling and tracking mapping method
Technical Field
The invention relates to the field of industrial software and artificial intelligence, in particular to a MOM system global resource collaborative scheduling and tracking mapping method.
Background
Against the backdrop of rapidly evolving artificial intelligence and intelligent manufacturing, MOM systems are in an unprecedented period of innovation. As a key tool, MOM systems are widely used in modern enterprises to execute and manage production activities efficiently. Their functions cover production planning, inventory management, production scheduling, quality management, equipment maintenance, personnel management, and more, with the aim of optimizing production flows and improving production efficiency and product quality. The need for manufacturing enterprises to develop their own MOM systems is becoming increasingly urgent.
Current low-code development platforms provide rich tooling, but in enterprise practice, developing MOM systems with low-code auxiliary development tools often runs into problems: high development cost, redundant and duplicated service components, and compatibility conflicts between different systems and interfaces. MOM low-code software development platforms have evolved to address such issues. However, in today's manufacturing environment, challenges such as frequently changing customer demands, complex and diverse production processes, and personalized market requirements keep emerging, and traditional business collaborative scheduling and deployment methods struggle to meet the required flexibility and efficiency.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a MOM system global resource collaborative scheduling and tracking mapping method that combines a multi-model-associated simulation debugging tool with hierarchical heterogeneous program compilation and linkage deployment technology, ensuring that the MOM system performs service development and deployment efficiently in the field of low-code software development across multi-domain environments.
The technical scheme of the invention is as follows.
A MOM system global resource collaborative scheduling and tracking mapping method comprises the following steps:
S1, constructing a multi-model simulation debugging tool to ensure the relevance between different models, and assembling and interpreting the models with a fuzzy mean clustering algorithm to ensure data consistency and reliable model interaction;
S2, registering the multi-model simulation debugging tool, adopting hierarchical heterogeneous program compilation and linkage deployment technology, and combining a comprehensive matching degree evaluation algorithm, multi-language integrated construction, workflow task scheduling, and a multi-mode partitioned EDF algorithm so that different levels take charge of different computing and storage tasks;
S3, based on global resource multi-engine collaborative scheduling technology, adopting an integrated scheduling framework that ferries resources horizontally and vertically among the front end, middle platform, and back end to cooperatively schedule and run a front-end scene unit logic scheduling engine, a middle-platform service function mirror scheduling engine, and a back-end service scheduling execution engine; the integrated scheduling framework calls the multi-model simulation debugging tool, sorts the tasks, computes priority weights, initializes a resource list, and allocates the tasks according to rules.
Preferably, the S1 specifically includes:
S101, receiving an instruction of an integrated scheduling framework through a scene design language, and performing dictionary matching and association on MOM design semantics;
S102, assembling and explaining a model according to scene MOM design semantics by using a fuzzy mean clustering algorithm;
s103, dynamically distributing resources to be assembled by utilizing a virtual resource pool;
S104, realizing the sustainability and stability of the assembly service through multi-center disaster recovery distributed deployment;
s105, combining heterogeneous programming and parallel mechanisms by using an ARM CPU and GPU multi-core vector processor, ensuring smooth task progress, and realizing serial and parallel management of tasks by programming language expansion;
And S106, the automation instance performs interaction between the roles and the functional models.
S107, compiling a model simulation debugging tool and deploying the model simulation debugging tool in a MOM system.
Preferably, in S103, dynamically allocating the resources to be allocated specifically includes:
Acquiring computing resources from the virtual resource pool by using cloud services, and dynamically allocating the resources in the virtual resource pool according to user demand until each cloud service points to a unique definition set; when the allocated resources are released by the user, they are returned to the virtual resource pool for reallocation and reuse.
Preferably, the S2 specifically includes:
s201, registering a multi-model simulation debugging tool, and carrying out validity verification by using a comprehensive matching degree evaluation algorithm;
s202, matching construction tool bags of different programming languages to realize multi-language integrated construction;
S203, automating and optimizing the compiling process through workflow task scheduling;
S204, adopting code-based image construction to manage the build environment and dependencies, thereby accelerating builds;
s205, realizing layered heterogeneous deployment by adopting a containerized workflow task scheduling deployment mode;
s206, adopting multi-mode release of codes to adapt to different deployment scenes and requirements;
S207, optimizing resource allocation and task scheduling by using a multi-mode partition EDF algorithm, and realizing resource binding with an integrated scheduling framework through a RESTful API interface.
Preferably, in S201, the implementation of the comprehensive matching degree evaluation algorithm is specifically as follows:
Define T as the set of tasks to be scheduled, M as the set of schedulable computing cores, G as the set of topological semantic graphs, and B_r as a subset semantic element of the semantic graph set, with B_r ∈ G;
Define a task to be scheduled as u and a resource device as v, and construct the decision pair <u, v>;
Define the data requirement of the task u to be scheduled as D_u:
D_u = [d_u1, d_u2, …, d_uk]
where k is the number of devices in the MOM system and d_uk is the distribution, on the k-th device, of the data required to execute task u;
Define the computing speed of task u on the computing resources as Q_u:
Q_u = [q_u1, q_u2, …, q_up]
where p is the number of computing resource types and q_up is the computing speed of task u on the p-th resource type;
Define the maximum number of cores task u may occupy on the computing resources as P_u:
P_u = [p_u1, p_u2, …, p_up]
where p_up is the maximum number of cores task u may occupy on the p-th resource type;
Define the operation counts of the computing resources owned by the resource device v as C_v:
C_v = [c_v1, c_v2, …, c_vp]
where c_vp is the operation count of resource device v on the p-th resource type;
Define the core counts of the computing resources owned by the resource device v as E_v:
E_v = [e_v1, e_v2, …, e_vp]
where e_vp is the core count of resource device v on the p-th resource type;
Traverse the schedulable computing core set M in an outer loop to obtain the left node u of each decision pair and return the corresponding core count E_used; traverse the task set T to be scheduled in an inner loop to obtain the right node v of each decision pair;
Next, solve the optimal bandwidth allocation vector B with the optimal bandwidth allocation algorithm allocOptBW:
B ← allocOptBW(u, v, B_r)
Compute the computation time T_cp for all combinations:
T_cp(u, v) ← Σ(Q_u / (P_u * C_v))
From the ratio of the data requirement of task u to the obtained optimal bandwidth allocation vector B, compute the maximum communication time T_cm over all combinations:
T_cm(u, v) ← max(D_u / B)
Compute the resource utilization variance σ:
σ(u, v) ← Var((P_u + E_used) / E_v)
Give each decision pair a priority W_tk according to the submission-time order T_sb(u) of the tasks:
W_tk(u, v) ← T_sb(u)
Normalize T_cp, T_cm, σ, and W_tk to obtain the resource matching degree R, the load balancing degree L, and the task fairness F, respectively;
Compute the comprehensive matching degree evaluation matrix I, where α is the relative weight of the resource matching degree, β the relative weight of the load balancing degree, and γ the relative weight of the task fairness:
I ← αR + βL + γF
Finally, set an empty decision set Ω, traverse the comprehensive matching degree evaluation matrix I, and check in descending order whether each decision pair <u, v> conflicts with the current decision set Ω; conflict conditions include a task having already been migrated, resource cores having already been allocated, and resource conflicts. If there is no conflict, the decision pair is added to the decision set Ω; this set is the scheduling decision of the algorithm and determines the assembly order and the minimum path of the functional components.
Preferably, in step S207, the multi-mode partitioned EDF algorithm is used to compute the basic time unit of the application scheduling process, where i denotes the parent-interval index, j the index of a child interval within a parent interval, k the length of the parent-interval sequence, d_max the maximum relative deadline of all messages, I_j the j-th parent interval, b the base of the child-interval exponential function, a the time complexity of system matching, and U_i,j the basic time unit of the j-th child interval of the i-th parent interval;
The calculation steps are as follows:
Assuming all messages are released at time 0 and the maximum relative deadline of all messages is d_max, the messages are first divided into k+1 parent intervals {i_1, i_2, …, i_k, i_k+1} by the exponential-function partitioning method, the range length of the i-th parent interval being determined by that partition; each parent interval I_j is then divided into q+1 child intervals by the same exponential-function partitioning method, the range of the (q+1)-th child interval of the i-th parent interval likewise being determined by the partition; finally, the tasks are ordered by maximum relative deadline from small to large, scheduling priorities are assigned to the tasks, and resources are invoked in that order, optimizing resource allocation.
Preferably, the S3 specifically includes:
s301, the integration framework calls a multi-model simulation debugging tool, and the tool determines a task scheduling sequence by using a predefined function;
s302, calculating the priority weight of each task;
S303, sorting tasks in descending order according to priority weights;
s304, initializing a free resource list;
S305, for each task, distributing according to a preset rule;
s306, the priority weights of the unassigned tasks are recalculated, and then the assignment of S305 is repeated to complete all the tasks.
Preferably, the step S305 specifically includes:
If the resource r_i required by task i is smaller than or equal to the minimum resource amount in the current idle resource list A, task i is allocated to the minimum resource among the available resources, and the allocated resource is removed from A; otherwise, task i is skipped and the next task is considered.
As can be seen from the above description, compared with the prior art, the MOM system global resource collaborative scheduling and tracking mapping method provided by the invention combines a multi-model-associated simulation debugging tool with hierarchical heterogeneous program compilation and linkage deployment technology, aiming to overcome the defects of the prior art and achieve efficient service development and deployment in a multi-domain environment. Its advantages and benefits are as follows:
(1) Through the multi-model-associated simulation debugging tool, the method guarantees data consistency and improves system stability; its automated instance interaction and multi-model debugging functions not only improve debugging efficiency but also reduce developers' time and effort, bringing efficiency gains to MOM system development;
(2) The method uses hierarchical heterogeneous program compilation and linkage deployment technology, improving compilation and deployment efficiency; through optimized management of task dependencies and flexible image management, the system can rapidly deploy and update applications, effectively improving suitability and performance and showing high flexibility and scalability in different environments;
(3) By means of the integrated scheduling framework for global resource collaboration, the method realizes task scheduling optimization, resource list initialization, and rule-based allocation; through intelligent task scheduling and complete task allocation, the system effectively avoids resource contention and conflicts and improves overall efficiency;
(4) The method improves the response speed and flexibility of MOM system functional requirements, reduces development cost, and reduces resource waste; under the future trend of intelligent manufacturing, the global resource collaborative scheduling and tracking mapping method opens broader development space for enterprises and helps the manufacturing industry advance toward intelligent, digital, and sustainable development.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for co-scheduling and tracking mapping of global resources of a MOM system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a basic framework of a method according to an embodiment of the invention;
FIG. 3 is a basic block diagram of a multi-model simulation debug tool in accordance with an embodiment of the present invention;
FIG. 4 is a basic block diagram of a program compiling and linkage deployment technique according to an embodiment of the present invention;
FIG. 5 is a diagram of a workflow task scheduling DAG model in accordance with an embodiment of the present invention;
FIG. 6 is a basic block diagram of a global resource multi-engine collaborative scheduling technique according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention relates to a MOM system global resource collaborative scheduling and tracking mapping method that combines a multi-model-associated simulation debugging tool with hierarchical heterogeneous program compilation and linkage deployment technology; it is mainly applied to MOM low-code development platforms in multi-domain environments to carry out service development and deployment tasks efficiently.
Referring to fig. 1, a method for co-scheduling and tracking mapping global resources of a MOM system includes:
S1, constructing a multi-model simulation debugging tool to ensure the relevance between different models, and assembling and interpreting the models with a fuzzy mean clustering algorithm to ensure data consistency and reliable model interaction;
S2, registering the multi-model simulation debugging tool, adopting hierarchical heterogeneous program compilation and linkage deployment technology, and combining a comprehensive matching degree evaluation algorithm, multi-language integrated construction, workflow task scheduling, and a multi-mode partitioned EDF algorithm so that different levels take charge of different computing and storage tasks;
S3, based on global resource multi-engine collaborative scheduling technology, adopting an integrated scheduling framework that ferries resources horizontally and vertically among the front end, middle platform, and back end to cooperatively schedule and run a front-end scene unit logic scheduling engine, a middle-platform service function mirror scheduling engine, and a back-end service scheduling execution engine; the integrated scheduling framework calls the multi-model simulation debugging tool, sorts the tasks, computes priority weights, initializes a resource list, and allocates the tasks according to rules.
The basic framework of the method is shown in Fig. 2: a multi-model-associated simulation debugging tool ensures data consistency and reliable model interaction; hierarchical heterogeneous program compilation and linkage deployment technology lets different levels take charge of different computing and storage tasks; and an integrated scheduling framework that ferries resources horizontally and vertically among the front end, middle platform, and back end cooperatively runs the front-end scene unit logic scheduling engine, the middle-platform service function mirror scheduling engine, and the back-end service scheduling execution engine.
In step S1, a multi-model simulation debugging tool is used to ensure the relevance between different models, and a fuzzy mean clustering algorithm is used to assemble and interpret the models so as to ensure data consistency and reliable model interaction. The basic structure of the multi-model simulation debugging tool is shown in Fig. 3, and step S1 specifically includes the following.
S101, receiving an instruction of the integrated scheduling framework through a scene design language so as to perform dictionary matching and association on MOM semantics.
S102, according to the scene MOM design semantics, the model is assembled and interpreted with a fuzzy mean clustering algorithm so as to cooperatively verify the model association information. The algorithm relies primarily on fuzzy set theory to model and analyze real-world data; the constructed clustering model quantifies similarity in detail and depicts the similarity relations among objects in the objective world relatively accurately. Define the mapping from the data space x of the MOM semantics to the feature space ψ(x); the nonlinear mapping ψ of the algorithm is:
ψ: x → ψ(x) ∈ F
where F is a transformed feature space of higher, possibly infinite, dimension.
According to the nonlinear mapping ψ, a kernel-based fuzzy clustering model MOML is obtained, in which a Gaussian function is used as the kernel function:
K(x, v) = exp(−|x − v|² / (2τ²))
where n is the number of data points, c the number of cluster centers, i the cluster index, j the data index, μ_ij the membership of the j-th data point in the i-th cluster, x_j the j-th point of the data space, v_i the semantic vector of the i-th cluster center, m the fuzziness exponent, U the fuzzy partition matrix, V the cluster-center vector matrix, and K(x, v) the inner-product (kernel) function.
τ² is the kernel width parameter; in particular, when the cluster-center vector is not specified, the default value K(x, v) = 1 is taken. The initial cluster centers, the output clusters, and the cluster centers of the semantics in the MOM are obtained with the MOML algorithm.
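The clustering step above can be illustrated with a small Python sketch, assuming the standard kernel fuzzy C-means update rules (the MOML objective itself is not reproduced in this text); the function names and the toy data are illustrative only.

```python
# Sketch of kernelized fuzzy C-means clustering for grouping MOM semantic vectors
# (S102), assuming the standard kernel-FCM updates with the Gaussian kernel
# K(x, v) = exp(-|x - v|^2 / (2 tau^2)). Names and data are illustrative.
import numpy as np

def gaussian_kernel(x, v, tau=1.0):
    """K(x, v) for data x of shape (n, d) and centers v of shape (c, d)."""
    d2 = ((x[:, None, :] - v[None, :, :]) ** 2).sum(axis=2)   # (n, c) squared distances
    return np.exp(-d2 / (2.0 * tau ** 2))

def kernel_fcm(x, c, m=2.0, tau=1.0, iters=100, eps=1e-6, seed=0):
    """Return the fuzzy partition matrix U (c, n) and cluster centers V (c, d)."""
    rng = np.random.default_rng(seed)
    n, _ = x.shape
    v = x[rng.choice(n, size=c, replace=False)]               # initial cluster centers
    for _ in range(iters):
        k = gaussian_kernel(x, v, tau)                        # (n, c) kernel values
        dist = np.clip(1.0 - k, 1e-12, None)                  # kernel-induced distance
        u = dist ** (-1.0 / (m - 1.0))
        u = u / u.sum(axis=1, keepdims=True)                  # memberships sum to 1 per point
        w = (u ** m) * k                                      # weights for the center update
        v_new = (w.T @ x) / w.sum(axis=0)[:, None]
        if np.linalg.norm(v_new - v) < eps:
            v = v_new
            break
        v = v_new
    return u.T, v

if __name__ == "__main__":
    # Toy usage: cluster 2-D "semantic" vectors into 3 groups.
    data = np.vstack([np.random.default_rng(i).normal(i * 3, 0.5, (20, 2)) for i in range(3)])
    U, V = kernel_fcm(data, c=3)
    print(np.round(V, 2))
```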
S103, the virtual resource pool is used to effectively manage the resources that the invoked tools need to assemble, so as to meet system requirements. Cloud services acquire computing resources from the virtual resource pool, and the resources in the pool are dynamically allocated according to user demand until each cloud service points to a unique definition set. When allocated resources are released by the user, they are returned to the virtual resource pool so that they can be reallocated.
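A minimal Python sketch of the on-demand allocate/release behaviour of the virtual resource pool described in S103; the class name, method names, and pool size are illustrative assumptions, not the MOM system's API.

```python
# Sketch of the virtual-resource-pool behaviour in S103: cloud services draw
# computing resources on demand, and released resources return to the pool for
# reuse. All identifiers are illustrative.
from dataclasses import dataclass, field

@dataclass
class VirtualResourcePool:
    free: set = field(default_factory=lambda: {f"res-{i}" for i in range(8)})
    bound: dict = field(default_factory=dict)   # cloud service -> its resource set

    def allocate(self, service: str, amount: int) -> set:
        """Bind `amount` free resources to a cloud service (its unique definition set)."""
        if amount > len(self.free):
            raise RuntimeError("pool exhausted")
        picked = {self.free.pop() for _ in range(amount)}
        self.bound.setdefault(service, set()).update(picked)
        return picked

    def release(self, service: str) -> None:
        """Return a service's resources to the pool so they can be reallocated."""
        self.free |= self.bound.pop(service, set())

pool = VirtualResourcePool()
pool.allocate("assembly-svc", 3)    # on-demand allocation
pool.release("assembly-svc")        # recovery for reuse
```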
S104, multi-center disaster-recovery distributed deployment is used to achieve the sustainability and stability of the assembly service and to guarantee the reliability of system services. The remote multi-active disaster-recovery technology enables complete data backup: whereas conventional methods simply back data up periodically to tape or other storage media, which can lead to data loss and inconsistency, remote multi-active deployment backs data up to multiple sites and stores it redundantly, ensuring data integrity and availability.
S105, an ARM CPU + GPU multi-core vector processor is used, combining heterogeneous programming with parallel mechanisms, to ensure that debugging tasks proceed as expected; C-language extensions such as CUDA and OpenCL, through their hierarchical thread/programming models, let the code support vectorization and multi-core execution simultaneously, achieving serial and parallel management of tasks.
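The serial/parallel task management described above can be sketched in Python, standing in for the CUDA/OpenCL hierarchical thread model: vectorized work inside each task and a process pool across tasks. The task function and workload sizes are illustrative assumptions.

```python
# Sketch of combining vectorization (within a task) with multi-core parallelism
# (across tasks), analogous to the hierarchical thread model mentioned above;
# written in Python/NumPy purely for illustration.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def debug_task(size: int) -> float:
    """One debugging task: vectorized (SIMD-style) work on a block of data."""
    data = np.arange(size, dtype=np.float64)
    return float(np.sqrt(data).sum())            # vector operation, no Python loop

def run_serial(sizes):
    return [debug_task(s) for s in sizes]        # serial management of tasks

def run_parallel(sizes, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(debug_task, sizes)) # parallel management across cores

if __name__ == "__main__":
    sizes = [1_000_000] * 8
    assert run_serial(sizes) == run_parallel(sizes)
    print("serial and parallel runs agree")
```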
S106, the automation instance performs interaction between roles and the functional model, and debugging efficiency and accuracy are improved.
S107, calling a compiling and constructing tool to perform deployment test so as to verify the functions and performances of the system.
In step S2, the hierarchical heterogeneous program compilation and linkage deployment technology combines a comprehensive matching degree evaluation algorithm, multi-language integrated construction, workflow task scheduling, and a multi-mode partitioned EDF algorithm; this series of techniques improves the efficiency of compilation, construction, and deployment. The basic structure of the program compilation and linkage deployment technology is shown in Fig. 4, and the implementation steps are as follows.
S201, based on a comprehensive matching degree evaluation algorithm (IMDE), the multi-model simulation debugging tool is registered, the validity of the evaluation method is verified, and heterogeneous construction and environment construction are carried out according to the semantically assembled functional components. This step helps improve the program's suitability and performance under different circumstances. The comprehensive matching degree evaluation algorithm is as follows:
Define T as the set of tasks to be scheduled, M as the set of schedulable computing cores, G as the set of topological semantic graphs, and B_r as a subset semantic element of the semantic graph set, with B_r ∈ G;
Define a task to be scheduled as u and a resource device as v, and construct the decision pair <u, v>;
Define the data requirement of the task u to be scheduled as D_u:
D_u = [d_u1, d_u2, …, d_uk]
where k is the number of devices in the MOM system and d_uk is the distribution, on the k-th device, of the data required to execute task u;
Define the computing speed of task u on the computing resources as Q_u:
Q_u = [q_u1, q_u2, …, q_up]
where p is the number of computing resource types and q_up is the computing speed of task u on the p-th resource type;
Define the maximum number of cores task u may occupy on the computing resources as P_u:
P_u = [p_u1, p_u2, …, p_up]
where p_up is the maximum number of cores task u may occupy on the p-th resource type;
Define the operation counts of the computing resources owned by the resource device v as C_v:
C_v = [c_v1, c_v2, …, c_vp]
where c_vp is the operation count of resource device v on the p-th resource type;
Define the core counts of the computing resources owned by the resource device v as E_v:
E_v = [e_v1, e_v2, …, e_vp]
where e_vp is the core count of resource device v on the p-th resource type;
Traverse the schedulable computing core set M in an outer loop to obtain the left node u of each decision pair and return the corresponding core count E_used; traverse the task set T to be scheduled in an inner loop to obtain the right node v of each decision pair;
Next, solve the optimal bandwidth allocation vector B with the optimal bandwidth allocation algorithm allocOptBW:
B ← allocOptBW(u, v, B_r)
The idea of the optimal bandwidth allocation algorithm allocOptBW is as follows:
Starting from the computing resource device v, iterate over the maximum bandwidth allocation of each device v' in the schedulable computing core set M, then adjust the bandwidth allocation of the path for the subset path element B_a in the semantic graph set until the size of the subset semantic element B_r in the semantic graph set is larger than B_a, thereby obtaining the minimum bandwidth allocation; this procedure is denoted allocOptBW;
Compute the computation time T_cp for all combinations:
T_cp(u, v) ← Σ(Q_u / (P_u * C_v))
From the ratio of the data requirement of task u to the obtained optimal bandwidth allocation vector B, compute the maximum communication time T_cm over all combinations:
T_cm(u, v) ← max(D_u / B)
Compute the resource utilization variance σ:
σ(u, v) ← Var((P_u + E_used) / E_v)
Give each decision pair a priority W_tk according to the submission-time order T_sb(u) of the tasks:
W_tk(u, v) ← T_sb(u)
Normalize T_cp, T_cm, σ, and W_tk to obtain the resource matching degree R, the load balancing degree L, and the task fairness F, respectively;
Compute the comprehensive matching degree evaluation matrix I, where α is the relative weight of the resource matching degree, β the relative weight of the load balancing degree, and γ the relative weight of the task fairness:
I ← αR + βL + γF
Finally, set an empty decision set Ω and check the decision pairs <u, v> in descending order of I for conflicts with the current decision set Ω. Conflict conditions include a task having already been migrated, resource cores having already been allocated, and resource conflicts; if there is no conflict, the decision pair is added to the decision set Ω. This set is the scheduling decision of the algorithm and determines the assembly order and the minimum path of the functional components.
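A minimal Python sketch of the evaluation loop above, under simplifying assumptions: allocOptBW is stubbed with an even bandwidth split, the four normalized quantities are folded into R, L, and F in an assumed way, and the conflict check only prevents reusing a task or a device. None of the identifiers below come from the patent.

```python
# Sketch of the comprehensive matching degree evaluation (IMDE): score every
# <task u, device v> pair, normalize the metrics, combine them with weights
# alpha/beta/gamma, and greedily keep non-conflicting pairs.
import numpy as np

def alloc_opt_bw(task, dev, n_links):
    # Placeholder for allocOptBW: split the device bandwidth evenly over the links.
    return np.full(n_links, dev["bandwidth"] / n_links)

def normalize(a):
    a = np.asarray(a, dtype=float)
    span = a.max() - a.min()
    return np.zeros_like(a) if span == 0 else (a - a.min()) / span

def imde(tasks, devices, alpha=0.5, beta=0.3, gamma=0.2):
    pairs, t_cp, t_cm, sigma, w_tk = [], [], [], [], []
    for u in tasks:
        for v in devices:
            b = alloc_opt_bw(u, v, len(u["D"]))
            pairs.append((u["id"], v["id"]))
            t_cp.append(np.sum(u["Q"] / (u["P"] * v["C"])))          # computation time
            t_cm.append(np.max(np.asarray(u["D"]) / b))              # communication time
            sigma.append(np.var((u["P"] + v["used"]) / v["E"]))      # utilization variance
            w_tk.append(u["submit"])                                 # submission-time priority
    R = 1.0 - normalize(np.add(normalize(t_cp), normalize(t_cm)))    # resource matching degree
    L = 1.0 - normalize(sigma)                                       # load balancing degree
    F = 1.0 - normalize(w_tk)                                        # task fairness (earlier scores higher)
    score = alpha * R + beta * L + gamma * F
    omega, busy_tasks, busy_devs = [], set(), set()
    for idx in np.argsort(-score):                                   # descending score
        tu, dv = pairs[idx]
        if tu not in busy_tasks and dv not in busy_devs:             # simple conflict check
            omega.append((tu, dv))
            busy_tasks.add(tu)
            busy_devs.add(dv)
    return omega

tasks = [{"id": "t1", "D": [2, 1], "Q": np.array([4.0]), "P": np.array([2.0]), "submit": 0},
         {"id": "t2", "D": [1, 3], "Q": np.array([2.0]), "P": np.array([1.0]), "submit": 1}]
devices = [{"id": "d1", "C": np.array([3.0]), "E": np.array([8.0]), "used": 2.0, "bandwidth": 10.0},
           {"id": "d2", "C": np.array([1.0]), "E": np.array([4.0]), "used": 1.0, "bandwidth": 5.0}]
print(imde(tasks, devices))
```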
S202, multi-language integrated construction is carried out, a business layer is introduced, corresponding construction tool packages are matched according to different programming languages, and multi-language integrated construction is achieved. This approach helps to improve the compilation efficiency and uniformity of cross-language projects.
S203, workflow task scheduling is used to automate and optimize the compilation process, improving the speed and quality of program construction. A workflow consists of a group of tasks with precedence relations, related through data dependencies; the individual tasks are atomic and cannot be interrupted during execution, and the execution of a subsequent task depends on the result of its predecessor. A workflow is generally described by a DAG, and a workflow task scheduling flow is shown in Fig. 5: the example workflow consists of ordered tasks in which task 1 schedules task 2 and task 5; task 2 executes its successors in sequence until task 8, while task 5 schedules tasks 6 and 7, which execute in sequence until task 8. Workflow task scheduling may cover steps such as compiling, testing, and packaging, making the overall process smoother.
The workflow task scheduling model is represented by a triplet <WF, L, E>, where WF denotes the set of n workflows, L denotes the set of m virtual machines, and E denotes the allocation relation between workflows and resources, as follows:
WF = {WF_1, WF_2, …, WF_n}
L = {L_1, L_2, …, L_m}
E = {<WF_i, L_j> | WF_i ∈ WF, L_j ∈ L, 0 ≤ i < n, 0 ≤ j < m}
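A small Python sketch of this <WF, L, E> model: a DAG patterned loosely on the Fig. 5 description is dispatched in topological order, and each ready task is bound to a virtual machine round-robin. The concrete edges, VM names, and round-robin policy are illustrative assumptions.

```python
# Sketch of the workflow scheduling model <WF, L, E>: WF is a workflow described
# as a DAG of atomic tasks, L the set of virtual machines, and E the resulting
# task-to-VM allocation relation. The DAG and the policy are illustrative only.
from collections import deque

dag = {                       # task -> successors (predecessor must finish first)
    "t1": ["t2", "t5"],
    "t2": ["t3"], "t3": ["t4"], "t4": ["t8"],
    "t5": ["t6", "t7"], "t6": ["t8"], "t7": ["t8"],
    "t8": [],
}
vms = ["L1", "L2", "L3"]      # the set L of m virtual machines

def schedule(dag, vms):
    indeg = {t: 0 for t in dag}
    for succs in dag.values():
        for s in succs:
            indeg[s] += 1
    ready = deque(t for t, d in indeg.items() if d == 0)
    allocation, i = [], 0     # allocation plays the role of E = {<WF_i, L_j>}
    while ready:
        task = ready.popleft()
        allocation.append((task, vms[i % len(vms)]))
        i += 1
        for s in dag[task]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return allocation

print(schedule(dag, vms))     # e.g. [('t1', 'L1'), ('t2', 'L2'), ('t5', 'L3'), ...]
```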
S204, by adopting code-based image construction, the build environment and dependencies can be effectively managed, improving build consistency and repeatability and speeding up builds.
S205, combining the technical means of S201 to S204, realizing layered heterogeneous deployment, and adopting a containerized workflow task scheduling deployment mode. The deployment method can bring higher flexibility and expandability and promote quick deployment and update of the application program.
S206, multi-mode release of codes is realized to adapt to different deployment scenes and requirements, and applicability and deployment flexibility of the program are improved.
S207, in the application orchestration service, the multi-mode partitioned EDF algorithm is used to optimize resource allocation and task scheduling, improving deployment efficiency and system performance, and resource binding with the integrated scheduling framework is achieved through a RESTful API interface. The multi-mode partitioned EDF algorithm is mainly used to compute the basic time unit of the application orchestration process, where i denotes the parent-interval index, j the index of a child interval within a parent interval, k the length of the parent-interval sequence, d_max the maximum relative deadline of all messages, I_j the j-th parent interval, b the base of the child-interval exponential function, a the time complexity of system matching, and U_i,j the basic time unit of the j-th child interval of the i-th parent interval;
The calculation steps are as follows:
Assuming all messages are released at time 0 and the maximum relative deadline of all messages is d_max, the messages are first divided into k+1 parent intervals {i_1, i_2, …, i_k, i_k+1} by the exponential-function partitioning method, the range length of the i-th parent interval being determined by that partition; each parent interval I_j is then divided into q+1 child intervals by the same exponential-function partitioning method, the range of the (q+1)-th child interval of the i-th parent interval likewise being determined by the partition; finally, the tasks are ordered by maximum relative deadline from small to large, scheduling priorities are assigned to the tasks, and resources are invoked in that order, optimizing resource allocation.
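A Python sketch of the partitioning and ordering steps above. Because the basic-time-unit formula is not reproduced in this text, the concrete partition rule (a geometric split with base b) is an assumption; only the overall structure, k+1 parent intervals, q+1 child intervals per parent, and earliest-deadline-first ordering, follows the description.

```python
# Sketch of the multi-mode partitioned-EDF step: partition the deadline horizon
# (0, d_max] into k+1 parent intervals with an exponential rule, split each
# parent into child intervals the same way, then order messages by maximum
# relative deadline (earliest first). The geometric rule is an assumption.
def exponential_partition(d_max, k, b=2.0):
    """Return k+1 parent intervals [(lo, hi), ...] covering (0, d_max]."""
    bounds = [d_max / (b ** i) for i in range(k + 1)] + [0.0]   # d_max, d_max/b, ..., 0
    return [(bounds[i + 1], bounds[i]) for i in reversed(range(k + 1))]

def child_intervals(parent, q, b=2.0):
    """Split one parent interval (lo, hi) into q+1 geometric child intervals."""
    lo, hi = parent
    width = hi - lo
    bounds = [lo + width / (b ** i) for i in range(q + 1)] + [lo]
    return [(bounds[i + 1], bounds[i]) for i in reversed(range(q + 1))]

def edf_order(messages):
    """messages: {name: relative deadline}; schedule earliest deadline first."""
    return sorted(messages, key=messages.get)

msgs = {"m1": 7.0, "m2": 2.5, "m3": 12.0}
parents = exponential_partition(d_max=max(msgs.values()), k=3)
print(parents)                          # k+1 parent intervals
print(child_intervals(parents[-1], q=2))  # q+1 child intervals of the last parent
print(edf_order(msgs))                  # ['m2', 'm1', 'm3']
```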
In step S3, the front-end scene unit logic scheduling engine, the middle-platform service function mirror scheduling engine, and the back-end service scheduling execution engine are cooperatively scheduled through a distributed cooperative scheduling algorithm, and the distribution and scheduling of service tasks are realized through service priorities and the resource allocation list. Referring to Fig. 6 for the basic structure of the global resource multi-engine cooperative scheduling technology, the distributed cooperative scheduling algorithm proceeds as follows.
S301, the integration framework calls the multi-model simulation debugging tool and is given j tasks, where each task i has a priority p_i and required resources r_i. The virtualized resource pool has k available resources. The order in which tasks are assigned is based on their priority and on the amount of available resources; a function S(i) is defined to sort the tasks and determine their scheduling order.
S302, the priority weight of each task is calculated as a function of p_i and r_i, where p_i denotes the priority of task i and r_i the resources it requires.
S303, sorting the tasks in descending order according to the priority weights.
S304, initializing a free resource list A:
A={1,2,…,k}
s305, for each task i, the allocation is performed according to the following rule:
If the required resource r_i of task i is less than or equal to the smallest resource amount in the current free resource list A, task i is allocated to the smallest available resource, and the allocated resource is removed from A; otherwise, task i is skipped and the next task is considered.
S306, for the unassigned tasks, the priority weights are recalculated, and then step S305 is repeated until all tasks are assigned to be completed.
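A minimal Python sketch of the S301-S306 loop, assuming a priority weight of p_i / r_i (the weight formula itself is not reproduced above) and modelling the free resource list A as a list of capacities; the task data are illustrative.

```python
# Sketch of the distributed cooperative scheduling loop (S301-S306): compute a
# priority weight per task, sort descending, greedily bind each task to the
# smallest free resource that can hold it, then recompute and repeat for the rest.
def priority_weight(task):
    return task["p"] / task["r"]                    # assumed weighting of priority vs. demand

def cooperative_schedule(tasks, resources):
    free = sorted(resources)                        # free resource list A (capacities)
    assignment, pending = {}, list(tasks)
    while pending and free:
        pending.sort(key=priority_weight, reverse=True)      # S302-S303
        still_pending = []
        for task in pending:                                 # S305
            if free and task["r"] <= min(free):
                slot = min(free)                             # smallest available resource
                free.remove(slot)
                assignment[task["id"]] = slot
            else:
                still_pending.append(task)                   # skip, revisit next round
        if len(still_pending) == len(pending):               # nothing placed: stop
            break
        pending = still_pending                              # S306: recompute and repeat
    return assignment, [t["id"] for t in pending]

tasks = [{"id": "t1", "p": 5, "r": 2}, {"id": "t2", "p": 3, "r": 4}, {"id": "t3", "p": 4, "r": 1}]
assignment, unplaced = cooperative_schedule(tasks, resources=[1, 2, 4])
print(assignment, unplaced)
```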
The above description covers only preferred embodiments of the present application and the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the application is not limited to the specific combinations of the technical features described above, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example solutions in which the above features are replaced by technical features with similar functions disclosed in (but not limited to) the present application.

Claims (8)

1. The MOM system global resource collaborative scheduling and tracking mapping method is characterized by comprising the following steps of:
S1, constructing a multi-model simulation debugging tool to ensure the relevance between different models, and assembling and interpreting the models with a fuzzy mean clustering algorithm to ensure data consistency and reliable model interaction;
S2, registering the multi-model simulation debugging tool, adopting hierarchical heterogeneous program compilation and linkage deployment technology, and combining a comprehensive matching degree evaluation algorithm, multi-language integrated construction, workflow task scheduling, and a multi-mode partitioned EDF algorithm so that different levels take charge of different computing and storage tasks;
S3, based on global resource multi-engine collaborative scheduling technology, adopting an integrated scheduling framework that ferries resources horizontally and vertically among the front end, middle platform, and back end to cooperatively schedule and run a front-end scene unit logic scheduling engine, a middle-platform service function mirror scheduling engine, and a back-end service scheduling execution engine; the integrated scheduling framework calls the multi-model simulation debugging tool, sorts the tasks, computes priority weights, initializes a resource list, and allocates the tasks according to rules.
2. The method for co-scheduling and tracking mapping of MOM system global resources according to claim 1, wherein S1 specifically comprises:
S101, receiving an instruction of an integrated scheduling framework through a scene design language, and performing dictionary matching and association on MOM design semantics;
S102, assembling and explaining a model according to scene MOM design semantics by using a fuzzy mean clustering algorithm;
s103, dynamically distributing resources to be assembled by utilizing a virtual resource pool;
S104, realizing the sustainability and stability of the assembly service through multi-center disaster recovery distributed deployment;
s105, combining heterogeneous programming and parallel mechanisms by using an ARM CPU and GPU multi-core vector processor, ensuring smooth task progress, and realizing serial and parallel management of tasks by programming language expansion;
And S106, the automation instance performs interaction between the roles and the functional models.
S107, compiling a model simulation debugging tool and deploying the model simulation debugging tool in a MOM system.
3. The method for co-scheduling and tracking mapping of MOM system global resources according to claim 2, wherein in S103, dynamically allocating resources to be allocated specifically includes:
Acquiring computing resources from the virtual resource pool by using cloud services, and dynamically allocating the resources in the virtual resource pool according to user demand until each cloud service points to a unique definition set; when the allocated resources are released by the user, they are returned to the virtual resource pool for reallocation and reuse.
4. The method for co-scheduling and tracking mapping of MOM system global resources according to claim 2, wherein S2 specifically comprises:
s201, registering a multi-model simulation debugging tool, and carrying out validity verification by using a comprehensive matching degree evaluation algorithm;
s202, matching construction tool bags of different programming languages to realize multi-language integrated construction;
S203, automating and optimizing the compiling process through workflow task scheduling;
S204, adopting code-based image construction to manage the build environment and dependencies, thereby accelerating builds;
s205, realizing layered heterogeneous deployment by adopting a containerized workflow task scheduling deployment mode;
s206, adopting multi-mode release of codes to adapt to different deployment scenes and requirements;
S207, optimizing resource allocation and task scheduling by using a multi-mode partition EDF algorithm, and realizing resource binding with an integrated scheduling framework through a RESTful API interface.
5. The method for co-scheduling and tracking mapping of global resources of MOM system according to claim 4, wherein in S201, the implementation of the comprehensive matching degree evaluation algorithm is specifically as follows:
Define T as the set of tasks to be scheduled, M as the set of schedulable computing cores, G as the set of topological semantic graphs, and B_r as a subset semantic element of the semantic graph set, with B_r ∈ G;
Define a task to be scheduled as u and a resource device as v, and construct the decision pair <u, v>;
Define the data requirement of the task u to be scheduled as D_u:
D_u = [d_u1, d_u2, …, d_uk]
where k is the number of devices in the MOM system and d_uk is the distribution, on the k-th device, of the data required to execute task u;
Define the computing speed of task u on the computing resources as Q_u:
Q_u = [q_u1, q_u2, …, q_up]
where p is the number of computing resource types and q_up is the computing speed of task u on the p-th resource type;
Define the maximum number of cores task u may occupy on the computing resources as P_u:
P_u = [p_u1, p_u2, …, p_up]
where p_up is the maximum number of cores task u may occupy on the p-th resource type;
Define the operation counts of the computing resources owned by the resource device v as C_v:
C_v = [c_v1, c_v2, …, c_vp]
where c_vp is the operation count of resource device v on the p-th resource type;
Define the core counts of the computing resources owned by the resource device v as E_v:
E_v = [e_v1, e_v2, …, e_vp]
where e_vp is the core count of resource device v on the p-th resource type;
Traverse the schedulable computing core set M in an outer loop to obtain the left node u of each decision pair and return the corresponding core count E_used; traverse the task set T to be scheduled in an inner loop to obtain the right node v of each decision pair;
Next, solve the optimal bandwidth allocation vector B with the optimal bandwidth allocation algorithm allocOptBW:
B ← allocOptBW(u, v, B_r)
Compute the computation time T_cp for all combinations:
T_cp(u, v) ← Σ(Q_u / (P_u * C_v))
From the ratio of the data requirement of task u to the obtained optimal bandwidth allocation vector B, compute the maximum communication time T_cm over all combinations:
T_cm(u, v) ← max(D_u / B)
Compute the resource utilization variance σ:
σ(u, v) ← Var((P_u + E_used) / E_v)
Give each decision pair a priority W_tk according to the submission-time order T_sb(u) of the tasks:
W_tk(u, v) ← T_sb(u)
Normalize T_cp, T_cm, σ, and W_tk to obtain the resource matching degree R, the load balancing degree L, and the task fairness F, respectively;
Compute the comprehensive matching degree evaluation matrix I, where α is the relative weight of the resource matching degree, β the relative weight of the load balancing degree, and γ the relative weight of the task fairness:
I ← αR + βL + γF
Finally, set an empty decision set Ω, traverse the comprehensive matching degree evaluation matrix I, and check in descending order whether each decision pair <u, v> conflicts with the current decision set Ω; conflict conditions include a task having already been migrated, resource cores having already been allocated, and resource conflicts. If there is no conflict, the decision pair is added to the decision set Ω; this set is the scheduling decision of the algorithm and determines the assembly order and the minimum path of the functional components.
6. The method for co-scheduling and tracking mapping of global resources of MOM system as claimed in claim 4, wherein in S207, the multi-mode partitioned EDF algorithm is used to compute the basic time unit of the application scheduling process, where i denotes the parent-interval index, j the index of a child interval within a parent interval, k the length of the parent-interval sequence, d_max the maximum relative deadline of all messages, I_j the j-th parent interval, b the base of the child-interval exponential function, a the time complexity of system matching, and U_i,j the basic time unit of the j-th child interval of the i-th parent interval;
The calculation steps are as follows:
Assuming all messages are released at time 0 and the maximum relative deadline of all messages is d_max, the messages are first divided into k+1 parent intervals {i_1, i_2, …, i_k, i_k+1} by the exponential-function partitioning method, the range length of the i-th parent interval being determined by that partition; each parent interval I_j is then divided into q+1 child intervals by the same exponential-function partitioning method, the range of the (q+1)-th child interval of the i-th parent interval likewise being determined by the partition; finally, the tasks are ordered by maximum relative deadline from small to large, scheduling priorities are assigned to the tasks, and resources are invoked in that order, optimizing resource allocation.
7. The method for co-scheduling and tracking mapping of MOM system global resources according to claim 1, wherein S3 specifically comprises:
s301, the integration framework calls a multi-model simulation debugging tool, and the tool determines a task scheduling sequence by using a predefined function;
s302, calculating the priority weight of each task;
S303, sorting tasks in descending order according to priority weights;
s304, initializing a free resource list;
S305, for each task, distributing according to a preset rule;
s306, the priority weights of the unassigned tasks are recalculated, and then the assignment of S305 is repeated to complete all the tasks.
8. The method for co-scheduling and tracking mapping of MOM system global resources according to claim 7, wherein S305 specifically comprises:
If the resource r_i required by task i is smaller than or equal to the minimum resource amount in the current idle resource list A, task i is allocated to the minimum resource among the available resources, and the allocated resource is removed from A; otherwise, task i is skipped and the next task is considered.
CN202410520593.8A (2024-04-28): MOM system global resource collaborative scheduling and tracking mapping method; status: Pending; published as CN118277273A (en)

Publications (1)

Publication Number Publication Date
CN118277273A (en), published 2024-07-02



Legal Events

PB01: Publication