CN117170873B - Resource pool management method and system based on artificial intelligence - Google Patents

Resource pool management method and system based on artificial intelligence

Info

Publication number
CN117170873B
CN117170873B (application CN202311178495.2A)
Authority
CN
China
Prior art keywords
task
resource pool
computing resource
processed
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311178495.2A
Other languages
Chinese (zh)
Other versions
CN117170873A (en)
Inventor
杨建仁
陈家劲
杨慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Clouddcs Co ltd
Original Assignee
Guangzhou Clouddcs Co ltd
Filing date
Publication date
Application filed by Guangzhou Clouddcs Co ltd filed Critical Guangzhou Clouddcs Co ltd
Priority to CN202311178495.2A priority Critical patent/CN117170873B/en
Publication of CN117170873A publication Critical patent/CN117170873A/en
Application granted granted Critical
Publication of CN117170873B publication Critical patent/CN117170873B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a resource pool management method and system based on artificial intelligence, wherein the method comprises the following steps: acquiring task parameters and processing requirements of a plurality of tasks to be processed sent by a plurality of task initiating nodes; screening a plurality of target computing resource pools from a candidate computing resource pool set based on a neural network algorithm according to the historical computing record of each task initiating node and the task parameters; determining processing strategies of the plurality of tasks to be processed based on a dynamic programming algorithm according to the task parameters and processing requirements of each task to be processed and the computing resource parameters of each target computing resource pool; and sending the plurality of tasks to be processed to the plurality of target computing resource pools for execution according to the processing strategies. The method and the system can thus effectively combine the advantages of the algorithms to improve the rationality of task allocation, thereby improving the processing efficiency and processing effect of cloud computing tasks.

Description

Resource pool management method and system based on artificial intelligence
Technical Field
The invention relates to the technical field of cloud computing, in particular to a resource pool management method and system based on artificial intelligence.
Background
With the development of cloud computing technology, more and more cloud services need to be distributed across multiple computing resource pools to improve the processing efficiency of cloud computing tasks, and how to effectively plan the allocation and matching between cloud computing tasks and computing resource pools has become an important technical problem. In the prior art, when such demands are handled, task distribution is generally performed by simply considering task parameters or by following the chronological order of task release, without further combining the advantages of neural networks and dynamic programming algorithms. It can be seen that the prior art has defects that need to be solved.
Disclosure of Invention
The invention aims to solve the technical problem of providing a resource pool management method and a resource pool management system based on artificial intelligence, which can effectively combine the advantages of algorithms to improve the rationality of task allocation so as to improve the processing efficiency and the processing effect of cloud computing tasks.
In order to solve the technical problem, the first aspect of the present invention discloses a resource pool management method based on artificial intelligence, which comprises the following steps:
acquiring task parameters and processing requirements of a plurality of tasks to be processed, which are sent by a plurality of task initiating nodes;
screening a plurality of target computing resource pools from a candidate computing resource pool set based on a neural network algorithm according to the historical computing record of each task initiating node and the task parameters;
determining processing strategies of the plurality of tasks to be processed based on a dynamic programming algorithm according to the task parameters and processing requirements of each task to be processed and the computing resource parameters of each target computing resource pool; the processing strategy is used for specifying the processing order and the processing resource pool corresponding to the plurality of tasks to be processed;
And sending the plurality of tasks to be processed to the plurality of target computing resource pools for execution according to the processing strategy.
As an optional implementation manner, in the first aspect of the present invention, the task parameter includes at least one of a task type, an amount of data to be processed by the task, and a device condition required for executing the task.
As an optional implementation manner, in the first aspect of the present invention, the processing requirement includes at least one of a processing time requirement, a processing cycle number requirement, and a processing result accuracy requirement.
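As an illustrative sketch, the task parameters and processing requirements listed above could be modeled as simple record types. The field names below are assumptions for illustration only and are not taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskParameters:
    # Mirrors the optional task parameters: task type, amount of data
    # to be processed, and required device condition.
    task_type: str
    data_volume_mb: float
    required_device: Optional[str]

@dataclass
class ProcessingRequirements:
    # Mirrors the optional processing requirements: time, cycle count,
    # and result-accuracy requirements; any field may be absent.
    max_processing_time_s: Optional[float]
    max_cycles: Optional[int]
    min_accuracy: Optional[float]
```

A task initiating node would send one `TaskParameters` and one `ProcessingRequirements` instance per task to be processed.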
As an optional implementation manner, in the first aspect of the present invention, the screening, based on a neural network algorithm, a plurality of target computing resource pools from a candidate computing resource pool set according to the historical computing record of each task initiating node and the task parameters includes:
compiling, according to the historical computing record of each task initiating node, the task delivery record and the data communication record between any task initiating node and any candidate computing resource pool;
calculating suitability parameters between each task initiating node and any candidate computing resource pool based on a neural network algorithm according to the task parameters of the plurality of tasks to be processed, the task delivery records and the data communication records;
for any candidate computing resource pool, calculating a weighted average of the suitability parameters between the candidate computing resource pool and all the task initiating nodes to obtain a priority parameter corresponding to the candidate computing resource pool; wherein the weight of each suitability parameter comprises a first weight and a second weight; the first weight is proportional to the total number of task delivery records and data communication records of the corresponding task initiating node; the second weight is proportional to the historical task result receipt rate of the task initiating node; the historical task result receipt rate is the proportion, within a preset historical time period, of records in which the task initiating node received the task completion result and the delivery conditions were honored, relative to the total number of records;
sorting all the candidate computing resource pools in descending order of the priority parameters to obtain a resource pool sequence;
and determining the first preset number of candidate computing resource pools in the resource pool sequence as the plurality of target computing resource pools.
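The ranking and selection steps above can be sketched as follows. This is illustrative Python, not the patent's implementation; in particular, the patent does not specify how the first and second weights are combined into one weight per suitability parameter, so a simple sum is assumed here:

```python
def pool_priority(suitabilities, record_counts, receipt_rates):
    """Priority parameter for one candidate pool: a weighted average of its
    suitability parameters over all task initiating nodes.

    suitabilities[i] : suitability between this pool and initiating node i
    record_counts[i] : total delivery + communication records for node i
                       (basis of the first weight)
    receipt_rates[i] : node i's historical task result receipt rate
                       (basis of the second weight)
    """
    # Assumption: combine the two weights by summation.
    weights = [c + r for c, r in zip(record_counts, receipt_rates)]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(s * w for s, w in zip(suitabilities, weights)) / total

def select_target_pools(candidate_pools, k):
    """candidate_pools: list of (pool_id, priority) pairs.
    Returns the first k pool ids in descending order of priority."""
    ranked = sorted(candidate_pools, key=lambda p: p[1], reverse=True)
    return [pool_id for pool_id, _ in ranked[:k]]
```

With priorities computed per pool, `select_target_pools` realizes the descending sort and the "first preset number" cut that yields the target computing resource pools.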
As an optional implementation manner, in the first aspect of the present invention, the calculating, based on a neural network algorithm, the suitability parameters between each task initiating node and any candidate computing resource pool according to the task parameters of the plurality of tasks to be processed, the task delivery records and the data communication records includes:
processing the task parameters of the plurality of tasks to be processed into a data set;
for any task initiating node and any candidate computing resource pool, calculating the similarity between the task parameters in each record of the task delivery records between the task initiating node and the candidate computing resource pool and the data set, and deleting records whose similarity is smaller than a preset similarity threshold from the task delivery records;
inputting the task delivery records into a trained first suitability prediction neural network to obtain a first suitability parameter between the task initiating node and the candidate computing resource pool; the first suitability prediction neural network is trained with a training data set comprising a plurality of training task delivery records and corresponding suitability labels;
inputting the data communication records into a trained second suitability prediction neural network to obtain a second suitability parameter between the task initiating node and the candidate computing resource pool; the second suitability prediction neural network is trained with a training data set comprising a plurality of training data communication records and corresponding suitability labels;
and calculating a weighted average of the first suitability parameter and the second suitability parameter to obtain the suitability parameter between the task initiating node and the candidate computing resource pool.
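The filtering and two-network fusion above can be sketched as follows. The two trained suitability prediction neural networks and the similarity function are passed in as callables, since the patent does not fix their architectures; the equal default weights are likewise an assumption:

```python
def combined_suitability(delivery_records, comm_records, task_dataset,
                         predict_first, predict_second, sim,
                         sim_threshold=0.5, w1=0.5, w2=0.5):
    """Suitability between one task initiating node and one candidate pool.

    predict_first / predict_second stand in for the trained first and second
    suitability prediction neural networks; sim(record, dataset) stands in
    for the similarity between a record's task parameters and the pooled
    task-parameter data set. Thresholds and weights are illustrative.
    """
    # Drop delivery records whose task parameters are too dissimilar
    # to the current batch of tasks to be processed.
    kept = [r for r in delivery_records if sim(r, task_dataset) >= sim_threshold]
    p1 = predict_first(kept)           # first suitability parameter
    p2 = predict_second(comm_records)  # second suitability parameter
    # Weighted average of the two suitability parameters.
    return (w1 * p1 + w2 * p2) / (w1 + w2)
```

In practice `predict_first` and `predict_second` would wrap inference calls on the trained networks; here any callables with the same shape suffice.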
As an optional implementation manner, in the first aspect of the present invention, the determining, based on a dynamic programming algorithm, the processing strategies of the plurality of tasks to be processed according to the task parameters and processing requirements of each task to be processed and the computing resource parameters of each target computing resource pool includes:
acquiring processor parameters and processor work records corresponding to each target computing resource pool; the processor parameters comprise the number of processors, the processor architecture, the processor type and the communication mode among the processors; the processor work records comprise the work energy consumption records of the processors and the communication records among the processors;
establishing a dynamic programming computation model according to the task parameters of each task to be processed and the processor parameters corresponding to each target computing resource pool;
determining an objective function and constraints corresponding to the dynamic programming computation model according to the processing requirements of each task to be processed and the processor work record corresponding to each target computing resource pool;
and solving the dynamic programming computation model according to the objective function and the constraints to obtain an optimal processing strategy corresponding to the plurality of tasks to be processed.
As an optional implementation manner, in the first aspect of the present invention, the determining, according to the processing requirements of each task to be processed and the processor work record corresponding to each target computing resource pool, the objective function and constraints corresponding to the dynamic programming computation model includes:
inputting the processor work record of each target computing resource pool into a trained processor efficiency prediction model to obtain the processor efficiency parameter corresponding to each target computing resource pool; the processor efficiency prediction model is trained with a training data set comprising a plurality of training processor work records and corresponding efficiency labels;
determining the objective function and constraints corresponding to the dynamic programming computation model according to the processing requirements of each task to be processed and the processor efficiency parameter corresponding to each target computing resource pool; the objective function requires that the sum of the completion times of all the tasks to be processed in the processing strategy be minimal, and that the sum of the products between the reciprocal of the task completion order of each task to be processed and the processor efficiency of the target computing resource pool that completes it be maximal.
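The two-part objective above can be sketched as a scoring function over candidate strategies. The patent does not specify how the minimization and maximization terms are combined, so a weighted scalarization with assumed coefficients `alpha` and `beta` is used here; lower scores are better:

```python
def strategy_score(assignment, alpha=1.0, beta=1.0):
    """Score one candidate processing strategy for the dynamic
    programming search.

    assignment: one (completion_time, completion_order, pool_efficiency)
    triple per task to be processed.

    First term: sum of completion times of all tasks (to be minimized).
    Second term: sum of products between the reciprocal of each task's
    completion order and the efficiency of the target pool that completes
    it (to be maximized, hence subtracted).
    """
    time_sum = sum(t for t, _, _ in assignment)
    order_eff = sum(eff / order for _, order, eff in assignment)
    # Assumption: combine the objectives by weighted scalarization.
    return alpha * time_sum - beta * order_eff
```

A dynamic programming solver would then enumerate feasible assignments under the constraints and keep the one with the smallest score.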
The second aspect of the invention discloses a resource pool management system based on artificial intelligence, which comprises:
the acquisition module is used for acquiring task parameters and processing requirements of a plurality of tasks to be processed sent by a plurality of task initiating nodes;
the screening module is used for screening a plurality of target computing resource pools from a candidate computing resource pool set based on a neural network algorithm according to the historical computing record of each task initiating node and the task parameters;
the determining module is used for determining the processing strategies of the plurality of tasks to be processed based on a dynamic programming algorithm according to the task parameters and processing requirements of each task to be processed and the computing resource parameters of each target computing resource pool; the processing strategy is used for specifying the processing order and the processing resource pool corresponding to the plurality of tasks to be processed;
and the execution module is used for sending the plurality of tasks to be processed to the plurality of target computing resource pools for execution according to the processing strategy.
As an optional implementation manner, in the second aspect of the present invention, the task parameter includes at least one of a task type, an amount of data to be processed by the task, and a device condition required for task execution.
As an alternative embodiment, in the second aspect of the present invention, the processing requirement includes at least one of a processing time requirement, a processing cycle number requirement, and a processing result accuracy requirement.
As an optional implementation manner, in the second aspect of the present invention, the specific manner in which the screening module screens a plurality of target computing resource pools from the candidate computing resource pool set based on a neural network algorithm according to the historical computing record of each task initiating node and the task parameters includes:
compiling, according to the historical computing record of each task initiating node, the task delivery record and the data communication record between any task initiating node and any candidate computing resource pool;
calculating suitability parameters between each task initiating node and any candidate computing resource pool based on a neural network algorithm according to the task parameters of the plurality of tasks to be processed, the task delivery records and the data communication records;
for any candidate computing resource pool, calculating a weighted average of the suitability parameters between the candidate computing resource pool and all the task initiating nodes to obtain a priority parameter corresponding to the candidate computing resource pool; wherein the weight of each suitability parameter comprises a first weight and a second weight; the first weight is proportional to the total number of task delivery records and data communication records of the corresponding task initiating node; the second weight is proportional to the historical task result receipt rate of the task initiating node; the historical task result receipt rate is the proportion, within a preset historical time period, of records in which the task initiating node received the task completion result and the delivery conditions were honored, relative to the total number of records;
sorting all the candidate computing resource pools in descending order of the priority parameters to obtain a resource pool sequence;
and determining the first preset number of candidate computing resource pools in the resource pool sequence as the plurality of target computing resource pools.
As an optional implementation manner, in the second aspect of the present invention, the specific manner in which the screening module calculates, based on a neural network algorithm, the suitability parameters between each task initiating node and any candidate computing resource pool according to the task parameters of the plurality of tasks to be processed, the task delivery records and the data communication records includes:
processing the task parameters of the plurality of tasks to be processed into a data set;
for any task initiating node and any candidate computing resource pool, calculating the similarity between the task parameters in each record of the task delivery records between the task initiating node and the candidate computing resource pool and the data set, and deleting records whose similarity is smaller than a preset similarity threshold from the task delivery records;
inputting the task delivery records into a trained first suitability prediction neural network to obtain a first suitability parameter between the task initiating node and the candidate computing resource pool; the first suitability prediction neural network is trained with a training data set comprising a plurality of training task delivery records and corresponding suitability labels;
inputting the data communication records into a trained second suitability prediction neural network to obtain a second suitability parameter between the task initiating node and the candidate computing resource pool; the second suitability prediction neural network is trained with a training data set comprising a plurality of training data communication records and corresponding suitability labels;
and calculating a weighted average of the first suitability parameter and the second suitability parameter to obtain the suitability parameter between the task initiating node and the candidate computing resource pool.
As an optional implementation manner, in the second aspect of the present invention, the specific manner in which the determining module determines the processing strategies of the plurality of tasks to be processed based on a dynamic programming algorithm according to the task parameters and processing requirements of each task to be processed and the computing resource parameters of each target computing resource pool includes:
acquiring processor parameters and processor work records corresponding to each target computing resource pool; the processor parameters comprise the number of processors, the processor architecture, the processor type and the communication mode among the processors; the processor work records comprise the work energy consumption records of the processors and the communication records among the processors;
establishing a dynamic programming computation model according to the task parameters of each task to be processed and the processor parameters corresponding to each target computing resource pool;
determining an objective function and constraints corresponding to the dynamic programming computation model according to the processing requirements of each task to be processed and the processor work record corresponding to each target computing resource pool;
and solving the dynamic programming computation model according to the objective function and the constraints to obtain an optimal processing strategy corresponding to the plurality of tasks to be processed.
As an optional implementation manner, in the second aspect of the present invention, the specific manner in which the determining module determines the objective function and constraints corresponding to the dynamic programming computation model according to the processing requirements of each task to be processed and the processor work record corresponding to each target computing resource pool includes:
inputting the processor work record of each target computing resource pool into a trained processor efficiency prediction model to obtain the processor efficiency parameter corresponding to each target computing resource pool; the processor efficiency prediction model is trained with a training data set comprising a plurality of training processor work records and corresponding efficiency labels;
determining the objective function and constraints corresponding to the dynamic programming computation model according to the processing requirements of each task to be processed and the processor efficiency parameter corresponding to each target computing resource pool; the objective function requires that the sum of the completion times of all the tasks to be processed in the processing strategy be minimal, and that the sum of the products between the reciprocal of the task completion order of each task to be processed and the processor efficiency of the target computing resource pool that completes it be maximal.
In a third aspect, the present invention discloses another resource pool management system based on artificial intelligence, the system comprising:
a memory storing executable program code;
a processor coupled to the memory;
The processor invokes the executable program code stored in the memory to perform some or all of the steps in the artificial intelligence based resource pool management method disclosed in the first aspect of the invention.
A fourth aspect of the invention discloses a computer storage medium storing computer instructions which, when invoked, are adapted to perform part or all of the steps of the artificial intelligence based resource pool management method disclosed in the first aspect of the invention.
Compared with the prior art, the invention has the following beneficial effects:
The method and the system can screen resource pools by combining the historical computing records of the task initiating nodes and compute the task strategy according to a dynamic programming algorithm, thereby effectively combining the advantages of the algorithms to improve the rationality of task allocation and further improve the processing efficiency and processing effect of cloud computing tasks.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of an artificial intelligence-based resource pool management method disclosed in an embodiment of the invention;
FIG. 2 is a schematic diagram of an artificial intelligence based resource pool management system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of another artificial intelligence based resource pool management system in accordance with an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without making any inventive effort are intended to be within the scope of the invention.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The invention discloses a resource pool management method and system based on artificial intelligence, which can screen resource pools by combining the historical computing records of the task initiating nodes and compute the task strategy according to a dynamic programming algorithm, so that the advantages of the algorithms can be effectively combined to improve the rationality of task allocation, thereby improving the processing efficiency and processing effect of cloud computing tasks. The following will describe this in detail.
Example 1
Referring to fig. 1, fig. 1 is a schematic flow chart of a resource pool management method based on artificial intelligence according to an embodiment of the present invention. The method described in fig. 1 may be applied to a corresponding data processing device, data processing terminal, or data processing server, where the server may be a local server or a cloud server; the embodiment of the present invention is not limited in this respect. As shown in fig. 1, the artificial intelligence based resource pool management method may include the following operations:
101. Acquiring task parameters and processing requirements of a plurality of tasks to be processed sent by a plurality of task initiating nodes.
Optionally, the task parameter includes at least one of a task type, an amount of data to be processed by the task, and a device condition required for task execution.
Optionally, the processing requirements include at least one of a processing time requirement, a processing cycle number requirement, and a processing result accuracy requirement.
102. Screening a plurality of target computing resource pools from a candidate computing resource pool set based on a neural network algorithm according to the historical computing record of each task initiating node and the task parameters.
103. Determining the processing strategies of the plurality of tasks to be processed based on a dynamic programming algorithm according to the task parameters and processing requirements of each task to be processed and the computing resource parameters of each target computing resource pool.
Specifically, the processing strategy is used for specifying the processing order and the processing resource pool corresponding to the plurality of tasks to be processed.
104. Sending the plurality of tasks to be processed to the plurality of target computing resource pools for execution according to the processing strategy.
Therefore, the method described in the embodiment of the present invention can screen resource pools in combination with the historical computing records of the task initiating nodes and compute the task strategy according to the dynamic programming algorithm, so that the advantages of the algorithms can be effectively combined to improve the rationality of task allocation, thereby improving the processing efficiency and processing effect of cloud computing tasks.
As an optional embodiment, the step of screening a plurality of target computing resource pools from the candidate computing resource pool set based on a neural network algorithm according to the historical computing record of each task initiating node and the task parameters includes:
compiling, according to the historical computing record of each task initiating node, the task delivery record and the data communication record between any task initiating node and any candidate computing resource pool;
calculating suitability parameters between each task initiating node and any candidate computing resource pool based on a neural network algorithm according to the task parameters of the plurality of tasks to be processed, the task delivery records and the data communication records;
for any candidate computing resource pool, calculating a weighted average of the suitability parameters between the candidate computing resource pool and all the task initiating nodes to obtain the priority parameter corresponding to the candidate computing resource pool; wherein the weight of each suitability parameter comprises a first weight and a second weight; the first weight is proportional to the total number of task delivery records and data communication records of the corresponding task initiating node; the second weight is proportional to the historical task result receipt rate of the task initiating node; the historical task result receipt rate is the proportion, within a preset historical time period, of records in which the task initiating node received the task completion result and the delivery conditions were honored, relative to the total number of records;
sorting all the candidate computing resource pools in descending order of the priority parameters to obtain a resource pool sequence;
and determining the first preset number of candidate computing resource pools in the resource pool sequence as the plurality of target computing resource pools.
Through this embodiment, the target computing resource pools can be screened out by calculating the suitability parameters and the priority parameters, so that reasonable and efficient computing resource pool screening can be realized, which subsequently helps to effectively combine the algorithm advantages to improve the rationality of task allocation, thereby improving the processing efficiency and processing effect of cloud computing tasks.
As an optional embodiment, the step of calculating, based on the neural network algorithm, the suitability parameters between each task initiating node and any candidate computing resource pool according to the task parameters of the plurality of tasks to be processed, the task delivery records and the data communication records includes:
processing the task parameters of the plurality of tasks to be processed into a data set;
for any task initiating node and any candidate computing resource pool, calculating the similarity between the task parameters in each record of the task delivery records between the task initiating node and the candidate computing resource pool and the data set, and deleting records whose similarity is smaller than a preset similarity threshold from the task delivery records;
inputting the task delivery records into a trained first suitability prediction neural network to obtain a first suitability parameter between the task initiating node and the candidate computing resource pool; the first suitability prediction neural network is trained with a training data set comprising a plurality of training task delivery records and corresponding suitability labels;
inputting the data communication records into a trained second suitability prediction neural network to obtain a second suitability parameter between the task initiating node and the candidate computing resource pool; the second suitability prediction neural network is trained with a training data set comprising a plurality of training data communication records and corresponding suitability labels;
and calculating a weighted average of the first suitability parameter and the second suitability parameter to obtain the suitability parameter between the task initiating node and the candidate computing resource pool.
Through this embodiment, the first suitability parameter and the second suitability parameter can be calculated by combining the neural network algorithm and used for screening the target computing resource pools, so that reasonable and efficient computing resource pool screening is achieved, and the subsequent steps can effectively combine the advantages of the algorithms to improve the rationality of task allocation, thereby improving the processing efficiency and processing effect of cloud computing tasks.
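The record filtering and two-network blending above can be illustrated with a short sketch. The cosine similarity, the threshold of 0.5, the blending weight `alpha`, and all names are assumptions for illustration; the patent only requires some similarity measure, a preset threshold, and a weighted average of the two predicted parameters.

```python
import numpy as np

def node_pool_suitability(delivery_records, comm_records, task_dataset,
                          first_net, second_net,
                          sim_threshold=0.5, alpha=0.6):
    """Filter delivery records by similarity to the pending-task data set,
    then blend the two networks' suitability predictions."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Drop delivery records whose task parameters differ too much from the
    # parameters of the tasks currently awaiting dispatch.
    kept = [r for r in delivery_records if cosine(r, task_dataset) >= sim_threshold]
    s1 = first_net(kept)           # first suitability parameter
    s2 = second_net(comm_records)  # second suitability parameter
    # Weighted average of the two parameters (alpha is an assumed weight).
    return alpha * s1 + (1.0 - alpha) * s2
```

In practice `first_net` and `second_net` would be the two trained prediction networks; here any callables that map a record list to a score can stand in for them.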
As an optional embodiment, in the above step, determining the processing strategies of the plurality of tasks to be processed based on the dynamic programming algorithm according to the task parameters and processing requirements of each task to be processed and the computing resource parameters of each target computing resource pool includes:
Acquiring processor parameters and processor work records corresponding to each target computing resource pool; the processor parameters include the number of processors, the processor architecture, the processor type and the communication mode among the processors; the processor work records comprise work energy consumption records of the processors and communication records among the processors;
establishing a dynamic programming computing model according to the task parameters of each task to be processed and the processor parameters corresponding to each target computing resource pool;
Determining an objective function and constraint conditions corresponding to the dynamic programming computing model according to the processing requirements of each task to be processed and the processor work records corresponding to each target computing resource pool;
And solving the dynamic programming computing model according to the objective function and the constraint conditions to obtain an optimal processing strategy corresponding to the plurality of tasks to be processed.
Through this embodiment, the optimal processing strategy corresponding to the plurality of tasks to be processed can be calculated by combining the dynamic programming algorithm with the processor parameters and processor work records corresponding to the target computing resource pools, so that the processing strategy can be determined while effectively combining the advantages of the algorithms to improve the rationality of task allocation, thereby improving the processing efficiency and processing effect of cloud computing tasks.
As an optional embodiment, in the above step, determining the objective function and constraint conditions corresponding to the dynamic programming computing model according to the processing requirements of each task to be processed and the processor work records corresponding to each target computing resource pool includes:
inputting the processor work records of each target computing resource pool into a trained processor efficiency prediction model to obtain processor efficiency parameters corresponding to each target computing resource pool; the processor efficiency prediction model is trained with a training data set comprising a plurality of training processor work records and corresponding efficiency labels;
Determining the objective function and constraint conditions corresponding to the dynamic programming computing model according to the processing requirements of each task to be processed and the processor efficiency parameters corresponding to each target computing resource pool; the objective function includes minimizing the sum of the completion times of all the tasks to be processed under the processing strategy, and maximizing the sum of the products between the inverse of the task completion order of each task to be processed and the processor efficiency of the target computing resource pool that completes it.
Through this embodiment, the objective function and constraint conditions corresponding to the dynamic programming computing model can be determined by combining the prediction algorithm with the processor work records of each target computing resource pool, so that the dynamic programming computing model can effectively combine the advantages of the algorithms to improve the rationality of task allocation, thereby improving the processing efficiency and processing effect of cloud computing tasks.
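The composite objective, minimize total completion time while maximizing the sum of (1/completion order) x (efficiency of the executing pool), can be made concrete on a toy scale. The exhaustive search below is a stand-in for the dynamic programming solver (the patent does not give the recurrence); the trade-off weight `lam`, the per-pool clocks, and the use of dispatch order as completion order are all simplifying assumptions.

```python
from itertools import permutations, product
from typing import Dict, List, Tuple

def score(order: List[Tuple[str, str]],
          task_time: Dict[str, float],
          efficiency: Dict[str, float],
          lam: float = 1.0) -> float:
    """Higher is better: reward finishing tasks early on efficient pools,
    penalize the summed completion times."""
    pool_clock: Dict[str, float] = {}
    total_completion = 0.0
    rank_bonus = 0.0
    for rank, (task, pool) in enumerate(order, start=1):
        # Tasks on the same pool run back to back; pools run in parallel.
        pool_clock[pool] = pool_clock.get(pool, 0.0) + task_time[task]
        total_completion += pool_clock[pool]           # this task's completion time
        rank_bonus += (1.0 / rank) * efficiency[pool]  # inverse order x pool efficiency
    return rank_bonus - lam * total_completion

def best_schedule(tasks, pools, task_time, efficiency):
    """Brute-force search over orders and task->pool assignments."""
    best, best_s = None, float("-inf")
    for perm in permutations(tasks):
        for assign in product(pools, repeat=len(tasks)):
            s = score(list(zip(perm, assign)), task_time, efficiency)
            if s > best_s:
                best, best_s = list(zip(perm, assign)), s
    return best
```

On two tasks and two pools this prefers spreading tasks across pools and completing short tasks on efficient pools first, which is the behaviour the two objective terms are meant to encode.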
Example II
Referring to fig. 2, fig. 2 is a schematic structural diagram of an artificial intelligence-based resource pool management system according to an embodiment of the present invention. The system described in fig. 2 may be applied to a corresponding data processing device, data processing terminal or data processing server, where the server may be a local server or a cloud server; embodiments of the present invention are not limited thereto. As shown in fig. 2, the system may include:
an obtaining module 201, configured to obtain task parameters and processing requirements of a plurality of tasks to be processed sent by a plurality of task initiation nodes;
A screening module 202, configured to screen a plurality of target computing resource pools from the candidate computing resource pool set based on a neural network algorithm according to the historical computing record of each task initiating node and the task parameters;
The determining module 203 is configured to determine the processing strategies of the plurality of tasks to be processed based on a dynamic programming algorithm according to the task parameters and processing requirements of each task to be processed and the computing resource parameters of each target computing resource pool; the processing strategy is used for defining the processing order and the processing resource pools corresponding to the plurality of tasks to be processed;
and the execution module 204 is configured to send the plurality of tasks to be processed to the plurality of target computing resource pools for execution according to the processing policy.
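The four modules above form a simple pipeline: obtain, screen, plan, dispatch. A minimal wiring can be sketched as follows; the class, the three injected callables, and their signatures are hypothetical stand-ins, not the patent's implementation.

```python
class ResourcePoolManager:
    """Toy wiring of the modules in fig. 2; each callable stands in for
    the corresponding module's logic (all names are illustrative)."""

    def __init__(self, screen, plan, dispatch):
        self.screen = screen      # screening module 202
        self.plan = plan          # determining module 203
        self.dispatch = dispatch  # execution module 204

    def handle(self, tasks, history, candidate_pools):
        # Obtaining module 201: tasks (with parameters and requirements) arrive here.
        target_pools = self.screen(tasks, history, candidate_pools)
        strategy = self.plan(tasks, target_pools)
        return self.dispatch(tasks, strategy)
```

Injecting the modules as callables keeps the pipeline testable: each stage can be replaced by a stub while the others are exercised.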
As an alternative embodiment, the task parameters include at least one of a task type, an amount of data to be processed by the task, and a device condition required for task execution.
As an alternative embodiment, the processing requirements include at least one of a processing time requirement, a processing cycle number requirement, and a processing result accuracy requirement.
As an alternative embodiment, the specific manner in which the screening module 202 screens a plurality of target computing resource pools from the candidate computing resource pool set based on the neural network algorithm according to the historical computing records of each task initiating node and the task parameters includes:
determining the task delivery records and the data communication records between any task initiating node and any candidate computing resource pool according to the historical computing records of each task initiating node;
calculating a suitability parameter between each task initiating node and any candidate computing resource pool based on the neural network algorithm according to the task parameters of the plurality of tasks to be processed, the task delivery records and the data communication records;
For any candidate computing resource pool, calculating a weighted average of the suitability parameters between the candidate computing resource pool and all task initiating nodes to obtain a priority parameter corresponding to the candidate computing resource pool; wherein the weight of each suitability parameter comprises a first weight and a second weight; the first weight is proportional to the total number of task delivery records and data communication records of the corresponding task initiating node; the second weight is proportional to the historical task result acceptance rate of the task initiating node; the historical task result acceptance rate is the proportion, within a preset historical time period, of records in which the task initiating node received a task completion result and honored the delivery conditions to the total number of records;
Sorting all candidate computing resource pools in descending order of the priority parameters to obtain a resource pool sequence;
and determining the first preset number of candidate computing resource pools in the resource pool sequence as the plurality of target computing resource pools.
As an alternative embodiment, the specific manner in which the screening module 202 calculates the suitability parameter between each task initiating node and any candidate computing resource pool based on the neural network algorithm according to the task parameters of the plurality of tasks to be processed, the task delivery records and the data communication records includes:
Processing the task parameters of the plurality of tasks to be processed into a data set;
for any task initiating node and any candidate computing resource pool, calculating the similarity between the task parameters in each record of the task delivery records between the task initiating node and the candidate computing resource pool and the data set, and deleting from the task delivery records any record whose similarity is smaller than a preset similarity threshold;
Inputting the filtered task delivery records into a trained first suitability prediction neural network to obtain a first suitability parameter between the task initiating node and the candidate computing resource pool; the first suitability prediction neural network is trained with a training data set comprising a plurality of training task delivery records and corresponding suitability labels;
Inputting the data communication records into a trained second suitability prediction neural network to obtain a second suitability parameter between the task initiating node and the candidate computing resource pool; the second suitability prediction neural network is trained with a training data set comprising a plurality of training data communication records and corresponding suitability labels;
and calculating a weighted average of the first suitability parameter and the second suitability parameter to obtain the suitability parameter between the task initiating node and the candidate computing resource pool.
As an alternative embodiment, the specific manner in which the determining module 203 determines the processing strategies of the plurality of tasks to be processed based on the dynamic programming algorithm according to the task parameters and processing requirements of each task to be processed and the computing resource parameters of each target computing resource pool includes:
Acquiring processor parameters and processor work records corresponding to each target computing resource pool; the processor parameters include the number of processors, the processor architecture, the processor type and the communication mode among the processors; the processor work records comprise work energy consumption records of the processors and communication records among the processors;
establishing a dynamic programming computing model according to the task parameters of each task to be processed and the processor parameters corresponding to each target computing resource pool;
Determining an objective function and constraint conditions corresponding to the dynamic programming computing model according to the processing requirements of each task to be processed and the processor work records corresponding to each target computing resource pool;
And solving the dynamic programming computing model according to the objective function and the constraint conditions to obtain an optimal processing strategy corresponding to the plurality of tasks to be processed.
As an optional embodiment, the specific manner in which the determining module 203 determines the objective function and constraint conditions corresponding to the dynamic programming computing model according to the processing requirements of each task to be processed and the processor work records corresponding to each target computing resource pool includes:
inputting the processor work records of each target computing resource pool into a trained processor efficiency prediction model to obtain processor efficiency parameters corresponding to each target computing resource pool; the processor efficiency prediction model is trained with a training data set comprising a plurality of training processor work records and corresponding efficiency labels;
Determining the objective function and constraint conditions corresponding to the dynamic programming computing model according to the processing requirements of each task to be processed and the processor efficiency parameters corresponding to each target computing resource pool; the objective function includes minimizing the sum of the completion times of all the tasks to be processed under the processing strategy, and maximizing the sum of the products between the inverse of the task completion order of each task to be processed and the processor efficiency of the target computing resource pool that completes it.
For the specific technical details and technical effects of the foregoing modules, reference may be made to the description in the first embodiment, which is not repeated herein.
Example III
Referring to fig. 3, fig. 3 is a schematic structural diagram of another resource pool management system based on artificial intelligence according to an embodiment of the present invention. As shown in fig. 3, the system may include:
a memory 301 storing executable program code;
a processor 302 coupled with the memory 301;
The processor 302 invokes the executable program code stored in the memory 301 to perform some or all of the steps in the artificial intelligence based resource pool management method disclosed in accordance with the embodiment of the present invention.
Example IV
The embodiment of the invention discloses a computer storage medium which stores computer instructions for executing part or all of the steps in the artificial intelligence-based resource pool management method disclosed in the embodiment of the invention when the computer instructions are called.
The system embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without undue effort.
From the above detailed description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on such understanding, the foregoing technical solutions may be embodied essentially, or in the part contributing to the prior art, in the form of a software product that may be stored in a computer-readable storage medium including Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disc memory, tape memory, or any other computer-readable medium that can be used to carry or store data.
Finally, it should be noted that the resource pool management method and system based on artificial intelligence disclosed in the embodiments of the present invention are only used to illustrate the technical solutions of the invention, not to limit them; although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions recorded in the various embodiments can still be modified, or some of their technical features can be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (9)

1. An artificial intelligence-based resource pool management method, comprising:
acquiring task parameters and processing requirements of a plurality of tasks to be processed, which are sent by a plurality of task initiating nodes;
determining a task delivery record and a data communication record between any task initiating node and any candidate computing resource pool according to the historical computing record of each task initiating node;
Calculating a suitability parameter between each task initiating node and any candidate computing resource pool based on a neural network algorithm according to the task parameters of the plurality of tasks to be processed, the task delivery records and the data communication records;
For any candidate computing resource pool, calculating a weighted average of the suitability parameters between the candidate computing resource pool and all the task initiating nodes to obtain a priority parameter corresponding to the candidate computing resource pool; wherein the weight of each suitability parameter comprises a first weight and a second weight; the first weight is proportional to the total number of records of the task delivery records and the data communication records of the corresponding task initiating node; the second weight is proportional to the historical task result acceptance rate of the task initiating node; the historical task result acceptance rate is the proportion, within a preset historical time period, of records in which the task initiating node received a task completion result and honored the delivery conditions to the total number of records;
Sorting all the candidate computing resource pools in descending order of the priority parameters to obtain a resource pool sequence;
Determining the first preset number of candidate computing resource pools in the resource pool sequence as a plurality of target computing resource pools;
Determining processing strategies of the plurality of tasks to be processed based on a dynamic programming algorithm according to the task parameters and processing requirements of each task to be processed and the computing resource parameters of each target computing resource pool; the processing strategies are used for defining the processing order and the processing resource pools corresponding to the plurality of tasks to be processed;
And sending the plurality of tasks to be processed to the plurality of target computing resource pools for execution according to the processing strategy.
2. The artificial intelligence based resource pool management method of claim 1, wherein the task parameters include at least one of task type, amount of data to be processed by the task, and equipment conditions required for task execution.
3. The artificial intelligence based resource pool management method of claim 1, wherein the processing requirements include at least one of processing time requirements, processing cycle times requirements, processing result accuracy requirements.
4. The artificial intelligence based resource pool management method of claim 1, wherein the calculating a suitability parameter between each task initiating node and any candidate computing resource pool based on the neural network algorithm according to the task parameters of the plurality of tasks to be processed, the task delivery records and the data communication records comprises:
Processing the task parameters of the plurality of tasks to be processed into a data set;
Calculating, for any task initiating node and any candidate computing resource pool, the similarity between the task parameters in each record of the task delivery records between the task initiating node and the candidate computing resource pool and the data set, and deleting from the task delivery records any record whose similarity is smaller than a preset similarity threshold;
Inputting the filtered task delivery records into a trained first suitability prediction neural network to obtain a first suitability parameter between the task initiating node and the candidate computing resource pool; the first suitability prediction neural network is trained with a training data set comprising a plurality of training task delivery records and corresponding suitability labels;
Inputting the data communication records into a trained second suitability prediction neural network to obtain a second suitability parameter between the task initiating node and the candidate computing resource pool; the second suitability prediction neural network is trained with a training data set comprising a plurality of training data communication records and corresponding suitability labels;
And calculating a weighted average of the first suitability parameter and the second suitability parameter to obtain the suitability parameter between the task initiating node and the candidate computing resource pool.
5. The artificial intelligence based resource pool management method of claim 1, wherein the determining the processing strategies of the plurality of tasks to be processed based on the dynamic programming algorithm according to the task parameters and processing requirements of each task to be processed and the computing resource parameters of each target computing resource pool comprises:
Acquiring processor parameters and processor work records corresponding to each target computing resource pool; the processor parameters comprise the number of processors, the processor architecture, the processor type and the communication mode among the processors; the processor work records comprise work energy consumption records of the processors and communication records among the processors;
Establishing a dynamic programming computing model according to the task parameters of each task to be processed and the processor parameters corresponding to each target computing resource pool;
Determining an objective function and constraint conditions corresponding to the dynamic programming computing model according to the processing requirements of each task to be processed and the processor work records corresponding to each target computing resource pool;
And solving the dynamic programming computing model according to the objective function and the constraint conditions to obtain an optimal processing strategy corresponding to the plurality of tasks to be processed.
6. The artificial intelligence based resource pool management method of claim 5, wherein the determining the objective function and constraint conditions corresponding to the dynamic programming computing model according to the processing requirements of each task to be processed and the processor work records corresponding to each target computing resource pool comprises:
Inputting the processor work records of each target computing resource pool into a trained processor efficiency prediction model to obtain processor efficiency parameters corresponding to each target computing resource pool; the processor efficiency prediction model is trained with a training data set comprising a plurality of training processor work records and corresponding efficiency labels;
Determining the objective function and constraint conditions corresponding to the dynamic programming computing model according to the processing requirements of each task to be processed and the processor efficiency parameters corresponding to each target computing resource pool; the objective function includes minimizing the sum of the completion times of all the tasks to be processed under the processing strategy, and maximizing the sum of the products between the inverse of the task completion order of each task to be processed and the processor efficiency of the target computing resource pool that completes it.
7. An artificial intelligence based resource pool management system, the system comprising:
The acquisition module is used for acquiring task parameters and processing requirements of a plurality of tasks to be processed, which are sent by a plurality of task initiating nodes;
The screening module is configured to screen a plurality of target computing resource pools from a candidate computing resource pool set based on a neural network algorithm according to a historical computing record of each task initiating node and the task parameters, and specifically includes:
determining a task delivery record and a data communication record between any task initiating node and any candidate computing resource pool according to the historical computing record of each task initiating node;
Calculating a suitability parameter between each task initiating node and any candidate computing resource pool based on a neural network algorithm according to the task parameters of the plurality of tasks to be processed, the task delivery records and the data communication records;
For any candidate computing resource pool, calculating a weighted average of the suitability parameters between the candidate computing resource pool and all the task initiating nodes to obtain a priority parameter corresponding to the candidate computing resource pool; wherein the weight of each suitability parameter comprises a first weight and a second weight; the first weight is proportional to the total number of records of the task delivery records and the data communication records of the corresponding task initiating node; the second weight is proportional to the historical task result acceptance rate of the task initiating node; the historical task result acceptance rate is the proportion, within a preset historical time period, of records in which the task initiating node received a task completion result and honored the delivery conditions to the total number of records;
Sorting all the candidate computing resource pools in descending order of the priority parameters to obtain a resource pool sequence;
Determining the first preset number of candidate computing resource pools in the resource pool sequence as a plurality of target computing resource pools;
The determining module is used for determining the processing strategies of the plurality of tasks to be processed based on a dynamic programming algorithm according to the task parameters and processing requirements of each task to be processed and the computing resource parameters of each target computing resource pool; the processing strategies are used for defining the processing order and the processing resource pools corresponding to the plurality of tasks to be processed;
And the execution module is used for sending the plurality of tasks to be processed to the plurality of target computing resource pools for execution according to the processing strategy.
8. An artificial intelligence based resource pool management system, the system comprising:
a memory storing executable program code;
a processor coupled to the memory;
The processor invokes the executable program code stored in the memory to perform the artificial intelligence based resource pool management method of any one of claims 1-6.
9. A computer storage medium storing computer instructions which, when invoked, are operable to perform the artificial intelligence based resource pool management method of any one of claims 1-6.
CN202311178495.2A 2023-09-12 Resource pool management method and system based on artificial intelligence Active CN117170873B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311178495.2A CN117170873B (en) 2023-09-12 Resource pool management method and system based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN117170873A CN117170873A (en) 2023-12-05
CN117170873B true CN117170873B (en) 2024-06-07

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104917812A (en) * 2015-04-16 2015-09-16 西安交通大学 Service node selection method applied to group intelligence calculation
CN111176846A (en) * 2019-12-30 2020-05-19 云知声智能科技股份有限公司 Task allocation method and device
CN111625331A (en) * 2020-05-20 2020-09-04 拉扎斯网络科技(上海)有限公司 Task scheduling method, device, platform, server and storage medium
CN115080212A (en) * 2022-06-30 2022-09-20 上海明胜品智人工智能科技有限公司 Task scheduling method, device, equipment and storage medium
CN116166395A (en) * 2022-12-05 2023-05-26 北京火山引擎科技有限公司 Task scheduling method, device, medium and electronic equipment
CN116521344A (en) * 2023-05-12 2023-08-01 广州卓勤信息技术有限公司 AI algorithm scheduling method and system based on resource bus


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant