CN113342510A - Water and power basin emergency command cloud-side computing resource cooperative processing method - Google Patents

Water and power basin emergency command cloud-side computing resource cooperative processing method Download PDF

Info

Publication number
CN113342510A
CN113342510A CN202110894257.6A CN202110894257A CN113342510A CN 113342510 A CN113342510 A CN 113342510A CN 202110894257 A CN202110894257 A CN 202110894257A CN 113342510 A CN113342510 A CN 113342510A
Authority
CN
China
Prior art keywords
cloud
computing
migration
task
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110894257.6A
Other languages
Chinese (zh)
Other versions
CN113342510B (en
Inventor
许剑
罗玮
王骞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guoneng Daduhe Big Data Service Co ltd
Original Assignee
Guoneng Daduhe Big Data Service Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guoneng Daduhe Big Data Service Co ltd filed Critical Guoneng Daduhe Big Data Service Co ltd
Priority to CN202110894257.6A priority Critical patent/CN113342510B/en
Publication of CN113342510A publication Critical patent/CN113342510A/en
Application granted granted Critical
Publication of CN113342510B publication Critical patent/CN113342510B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/48Indexing scheme relating to G06F9/48
    • G06F2209/484Precedence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a hydropower basin emergency command cloud-side computing resource cooperative processing method, which relates to the technical field of cloud computing and adopts the technical scheme that: acquiring sensor monitoring data acquired by a power station end, and preprocessing the sensor monitoring data at a side cloud side to form a calculation task; if the time for locally executing the calculation is greater than the maximum allowable delay, sending a migration request of a corresponding calculation task to the central cloud; performing ant colony algorithm-based computing task queue optimization computing on all migration requests according to the current node resource load balancing condition of the central cloud and the network transmission time cost from the edge cloud node to the central cloud node to obtain an optimal allocation strategy; and the corresponding computing task responds to the optimal allocation strategy and then carries out computing task migration so as to realize cloud-side computing resource cooperative disposal. The method and the system give full play to the advantage of sufficient central cloud resources, and improve the overall computing efficiency of the platform from a software level.

Description

Water and power basin emergency command cloud-side computing resource cooperative processing method
Technical Field
The invention relates to the technical field of cloud computing, in particular to a coordinated processing method for computing resources at cloud side for emergency command of a hydropower domain.
Background
The traditional single application system is high in integration level and low in development, debugging and deployment difficulty, the problem of high single application complexity is continuously exposed along with the continuous increase of the requirements of functions and performance of the application system, the module design difficulty is high, the boundary division of each module is fuzzy, the dependence relationship is heavily coupled, and the module integration difficulty is increased rapidly. In order to ensure the reliability and high efficiency of the single application for effectively supporting the large application, the method can be only enhanced in a limited way by increasing servers, database cluster deployment, access load balancing and the like, which have high cost and weak effect. Traditional software process models such as a waterfall model, a rapid prototype, incremental development and a fountain model exist in the field of software engineering, the most common waterfall model takes at least several months from design, development to deployment, and any requirement change, regression test after error repair and redeployment which occur midway take several days to several weeks. The traditional centralized mode of cloud computing based on first transmission and then operation is difficult to guarantee basin-level water disaster comprehensive emergency command in a complex environment, and in the environment facing adverse factors such as complex geographical environment of basin and high gorges, long network transmission distance, unstable communication conditions and the like, the traditional centralized cloud computing model based on first transmission and then operation has the problems of low efficiency, large data delay and the like, and is difficult to effectively guarantee reliable transmission of water disaster data, rapid analysis of an early warning prevention and control model and smooth and convenient emergency command.
In a traditional monolithic application architecture, an application system includes all business functions, and the application itself is directly deployed as a huge deployment unit, and all the functions are deployed in the same process. When high concurrency is encountered, the traditional huge application faces the problem of expanding deployment. Because all functions are deployed together, even if only one of the functions has high concurrency requirements, the horizontal extension of the application system must be achieved by re-deploying the entire application system on other servers. Therefore, the requirements of dynamic elastic expansion on the calculation, storage and network resources of the platform are provided no matter short-time centralized large-scale calculation required by a basin-level weather forecast WRF mode, a cascade electric warfare group combined flood control dispatching model and the like, or the requirements of panoramic visualization and non-delay emergency command of the headquarters related to the basin plateau field, mountain hydropower stations and hundreds of kilometers away after a water disaster event occurs.
The cloud computing is a computing mode and a service mode which are based on an information network, information technology resources are dynamically and elastically provided in a service mode, and users can use the computing mode and the service mode as required. Since the rise of cloud computing, an application deployment mode is completed to migrate from a traditional physical machine to a cloud deployment of a virtual machine, an application development mode is changed from a traditional single application to a distributed service mode, an application system is changed from a stateful mode to a stateless mode, a back-end database cluster is subjected to data service, and a front-end application accesses data through an HTTP interface and displays service functions and inputs user instructions by using a self-adaptive display technology. Therefore, how to research and design a coordinated processing method for cloud-side computing resources for emergency command in hydropower regions, which can overcome the defects, is a problem which is urgently needed to be solved at present.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention aims to provide a coordinated processing method for cloud-side computing resources for emergency command of a hydropower basin.
The technical purpose of the invention is realized by the following technical scheme: a hydropower basin emergency command cloud-side computing resource cooperative processing method comprises the following steps:
acquiring sensor monitoring data acquired by a power station end, and preprocessing the sensor monitoring data at a side cloud side to form at least one computing task;
acquiring the maximum allowable delay of the calculation task, and calculating to obtain the time for the calculation task to execute calculation locally; if the time for locally executing the calculation is greater than the maximum allowable delay, sending a migration request of a corresponding calculation task to the central cloud;
performing ant colony algorithm-based computing task queue optimization computing on all migration requests according to the current node resource load balancing condition of the central cloud and the network transmission time cost from the edge cloud node to the central cloud node to obtain an optimal allocation strategy;
and the corresponding computing task responds to the optimal allocation strategy and then carries out computing task migration so as to realize cloud-side computing resource cooperative disposal.
Further, the central cloud receives the migration request and then generates record information for the corresponding computing migration task;
the recorded information comprises final execution place information, network bandwidth resources distributed by the center cloud for corresponding computing tasks, CPU computing resources distributed by the center cloud for corresponding computing tasks, data sizes of the computing tasks and maximum allowable delay of the computing tasks;
and the central cloud node synchronizes and updates the record information of the computation migration tasks under the central cloud, and after making a computation migration decision each time, the record information is synchronously changed and broadcasted to all nodes of the central cloud, and the records of all the computation migration tasks are integrated into a task record set.
Further, the optimization calculation process of the optimal allocation strategy specifically includes:
calculating according to network bandwidth resources distributed to the computing tasks by the central cloud nodes to obtain migration transmission delay of migration of the corresponding computing tasks to the corresponding central cloud nodes;
calculating according to the CPU resources distributed to the computing tasks by the central cloud nodes to obtain migration computing time of the corresponding computing tasks for executing computing on the corresponding central cloud nodes;
calculating to obtain the total migration delay of the migration of the corresponding calculation task according to the sum of the migration transmission delay and the migration calculation time;
and (4) considering the load balancing condition of each node of the central cloud, and performing optimized computation on the cloud-side computing resource efficiency by minimizing the execution completion time of all computing tasks to obtain the optimal allocation strategy of the cloud-side computing resources.
Further, the optimal calculation formula of the optimal allocation strategy specifically includes:
Figure DEST_PATH_IMAGE002
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE004
the total migration delay representing task migration is minimum; n represents the number of ants; m represents the number of central cloud resources;
Figure DEST_PATH_IMAGE006
and representing the total migration delay of the computing task i to the central cloud node j.
Further, the optimal allocation strategy is optimally calculated by performing resource matching with a migration probability, wherein a calculation formula of the migration probability specifically includes:
Figure DEST_PATH_IMAGE008
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE010
representing the probability of migration of the computing task i to the central cloud node j;
Figure DEST_PATH_IMAGE012
represents the pheromone concentration on pathway (i, j);
Figure DEST_PATH_IMAGE014
represents the pheromone concentration on pathway (i, q);
Figure DEST_PATH_IMAGE016
representing the sensitivity of ants to pheromones;
Figure DEST_PATH_IMAGE018
representing the attraction level of the central cloud node j to ants on the computing task i, wherein the lower the migration cost is, the larger the value is;
Figure DEST_PATH_IMAGE020
representing node q pairs of ants on computational task iThe level of attraction;
Figure DEST_PATH_IMAGE022
representing the sensitivity of the ant colony to pheromones;
Figure DEST_PATH_IMAGE024
representing nodes that are not visited;
Figure DEST_PATH_IMAGE026
indicating the visited node.
Further, when the optimal allocation strategy is optimally calculated, the pheromone updating formula on the path specifically includes:
Figure DEST_PATH_IMAGE028
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE030
the pheromone concentration of the computing task i transferred to the central cloud node j at the moment t +1 is represented;
Figure DEST_PATH_IMAGE032
the pheromone concentration of the computing task i transferred to the central cloud node j at the moment t is represented;
Figure DEST_PATH_IMAGE034
expressing the volatilization coefficient of pheromone, and taking the value as
Figure DEST_PATH_IMAGE036
Figure DEST_PATH_IMAGE038
Represents the total amount of pheromone released by all ants on path (i, j);
Figure DEST_PATH_IMAGE040
representing the load condition of the corresponding central cloud node;
Figure DEST_PATH_IMAGE042
represents the current optimal pheromone gain;
Figure DEST_PATH_IMAGE044
and representing the weight of the individual contribution degree of the historical optimal elite.
Further, the calculation formula of the total pheromone amount is specifically as follows:
Figure DEST_PATH_IMAGE046
wherein n represents the number of ants; exp represents a non-linear decreasing function;
Figure DEST_PATH_IMAGE048
representing a hyper-parameter; k represents the kth ant;
Figure DEST_PATH_IMAGE050
indicating the increment at which the kth ant releases the pheromone on path (i, j).
Further, the pheromone increment calculation formula is specifically as follows:
Figure DEST_PATH_IMAGE052
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE054
representing the total amount of the reserved pheromone on the path after one search is finished;
Figure DEST_PATH_IMAGE056
representing the calculated task migration cost obtained by the matching scheme selected by the kth ant; 0 indicates that the kth ant has not traveled the path (i, j).
Further, the weight calculation formula of the individual contribution of the historical optimal elite is specifically as follows:
Figure DEST_PATH_IMAGE058
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE060
representing the last operation time of the central cloud node j;
Figure DEST_PATH_IMAGE062
representing the average run time of all the central cloud nodes.
Further, the migration process of the computing task specifically includes:
if the computing task needs to be migrated, the edge cloud sends a migration request of the computing task to the center cloud;
after receiving the migration request, the center cloud obtains an optimal computation migration decision according to the load condition of the current center cloud node and by combining the network transmission time cost from the computation task to the center cloud, and sends the optimal computation migration decision to the edge cloud where the computation task i is located;
after receiving an instruction that a computing task is allowed to migrate, the edge cloud uploads data to be processed to a specified central cloud node through a power special transmission network, and corresponding network bandwidth resources are allocated for data uploading;
and the central cloud node allocates corresponding computing resources for processing the migrated computing tasks and stores the computing results into a database.
Compared with the prior art, the invention has the following beneficial effects:
1. in an end-edge-cloud architecture of a hydropower basin water disaster comprehensive emergency command platform, a dynamic cooperation model of edge cloud computing resources is introduced, queuing delay computing tasks on the edge cloud side of a drainage basin power station are dynamically migrated to a central cloud, the advantage of sufficient central cloud resources is fully played, and the overall computing efficiency of the platform is improved from a software level;
2. according to the cloud computing resource efficiency optimization method, migration decision, bandwidth allocation and computing resource joint optimization allocation are carried out through an improved ant colony algorithm to achieve cloud computing resource efficiency optimization, through an improvement strategy that the excellent individual pheromone updating weight is decreased in a nonlinear mode and each generation of the historical optimal elite individual contributes pheromones, the load balance of a central cloud node is considered, the optimization matching effect is better, and the overall time of computing tasks is effectively reduced.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
fig. 1 is a flow chart in an embodiment of the present invention.
FIG. 2 is a comparison graph of the synergistic effect of cloud-edge computing resources in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.
Example (b): a hydropower domain emergency command cloud-side computing resource cooperative processing method is shown in figure 1 and comprises the following steps:
s1: acquiring sensor monitoring data acquired by a power station end, and preprocessing the sensor monitoring data at a side cloud side to form a plurality of computing tasks;
s2: acquiring the maximum allowable delay of the calculation task, and calculating to obtain the time for the calculation task to execute calculation locally; if the time for locally executing the calculation is greater than the maximum allowable delay, sending a migration request of a corresponding calculation task to the central cloud;
s3: performing ant colony algorithm-based computing task queue optimization computing on all migration requests according to the current node resource load balancing condition of the central cloud and the network transmission time cost from the edge cloud node to the central cloud node to obtain an optimal allocation strategy;
s4: and the corresponding computing task responds to the optimal allocation strategy and then carries out computing task migration so as to realize cloud-side computing resource cooperative disposal.
It should be noted that each edge cloud may migrate the computing task to a designated central cloud node, and the central cloud node allocates bandwidth and corresponding computing resources to the computing migration task. On the basis of the shared bandwidth and the computing resources, each computing task is considered as an independent execution unit.
After receiving the migration request, the central cloud generates record information for the corresponding computing migration task; the recorded information comprises final execution place information, network bandwidth resources distributed by the center cloud for corresponding computing tasks, CPU computing resources distributed by the center cloud for corresponding computing tasks, data sizes of the computing tasks and maximum allowable delay of the computing tasks; and the central cloud node synchronizes and updates the record information of the computation migration tasks under the central cloud, and after making a computation migration decision each time, the record information is synchronously changed and broadcasted to all nodes of the central cloud, and the records of all the computation migration tasks are integrated into a task record set.
Defining an edge cloud computing task i and a center cloud node j, wherein i belongs to {1, 2, …, N }, and j belongs to {0, 1, 2, …, M }.
Recording information TijThe expression of (a) is:
Figure DEST_PATH_IMAGE064
in the formula, sijRepresenting whether the computing task i is finally executed locally in the edge cloud or migrated to the center cloud for execution; bwijRepresenting network bandwidth resources allocated by the central cloud for the computing task; compijRepresenting CPU computing resources distributed by the central cloud for the computing task; viRepresenting the data size of the calculation task i; t isi tolerRepresenting the maximum allowable delay for computing task i.
The expression of the task record set is:
Figure DEST_PATH_IMAGE066
when s isijIf the execution time is not less than 0, the local computing execution time of the computing task is less than the maximum allowable delay of the task, and the cloud-edge computing resource cooperation model selects to execute the task locally; when s isijWhen =1, the local computation execution time of the computation task is greater than the maximum allowable delay of the task, and the cloudAnd the edge computing resource collaborative model selects to dynamically distribute the computing task to a certain node of the cloud center for operation.
The optimization calculation process of the optimal allocation strategy specifically comprises the following steps:
s301: calculating according to network bandwidth resources distributed to the computing tasks by the central cloud nodes to obtain migration transmission delay of migration of the corresponding computing tasks to the corresponding central cloud nodes;
s302: calculating according to the CPU resources distributed to the computing tasks by the central cloud nodes to obtain migration computing time of the corresponding computing tasks for executing computing on the corresponding central cloud nodes;
s303: calculating to obtain the total migration delay of the migration of the corresponding calculation task according to the sum of the migration transmission delay and the migration calculation time;
s304: and (4) considering the load balancing condition of each node of the central cloud, and performing optimized computation on the cloud-side computing resource efficiency by minimizing the execution completion time of all computing tasks to obtain the optimal allocation strategy of the cloud-side computing resources.
The local computation execution time of the computation task i is only related to the processing power of the local CPU. Thus computing the local computation execution time of task i
Figure DEST_PATH_IMAGE068
Comprises the following steps:
Figure DEST_PATH_IMAGE070
migration transmission delay for migration of computing task i to central cloud node j
Figure DEST_PATH_IMAGE072
Comprises the following steps:
Figure DEST_PATH_IMAGE074
migration computing time for computing task i to perform computing at central cloud node j is as follows:
Figure DEST_PATH_IMAGE076
the optimal calculation formula of the optimal allocation strategy is specifically as follows:
Figure 205224DEST_PATH_IMAGE002
wherein the content of the first and second substances,
Figure 176591DEST_PATH_IMAGE004
the total migration delay representing task migration is minimum; n represents the number of ants; m represents the number of central cloud resources;
Figure 200173DEST_PATH_IMAGE006
and representing the total migration delay of the computing task i to the central cloud node j. Total migration delay of computing task i to central cloud node j
Figure 866778DEST_PATH_IMAGE006
The sum of the migration computation time and the migration transmission delay is calculated.
The ant colony algorithm is inspired by the foraging behavior process of ants in nature, has the characteristics of heuristic, positive feedback and distributed search, and is generally applied to the optimization solving process of a plurality of NP problems. The ants leave pheromones on the paths where the ants find the food to pass, and later ants select proper paths according to the concentration of the pheromones left by the ants on the paths, namely, each ant is transferred from one resource match to the next resource match according to a certain probability. On the path that food is sought to pass, ants leave pheromones, and later ants can leave pheromone concentration according to the ants that pass on the path to select proper paths, namely, each ant can select a node which is transferred from one node in the side cloud to a certain node in the center cloud according to a certain probability. When the optimal allocation strategy is optimally calculated, resource matching is carried out by using the migration probability, and the calculation formula of the migration probability is specifically as follows:
Figure DEST_PATH_IMAGE077
wherein the content of the first and second substances,
Figure 70226DEST_PATH_IMAGE010
representing the probability of migration of the computing task i to the central cloud node j;
Figure 364941DEST_PATH_IMAGE012
represents the pheromone concentration on pathway (i, j);
Figure 822074DEST_PATH_IMAGE014
represents the pheromone concentration on pathway (i, q);
Figure 710395DEST_PATH_IMAGE016
representing the sensitivity of ants to pheromones;
Figure 779851DEST_PATH_IMAGE018
representing the attraction level of the central cloud node j to ants on the computing task i, wherein the lower the migration cost is, the larger the value is;
Figure 70018DEST_PATH_IMAGE020
representing the attraction level of the node q to ants on the calculation task i;
Figure 700982DEST_PATH_IMAGE022
representing the sensitivity of the ant colony to pheromones;
Figure 76600DEST_PATH_IMAGE024
representing nodes that are not visited;
Figure 684168DEST_PATH_IMAGE026
indicating the visited node.
In the process of continuously searching for the optimal matching, initial pheromones left on the path by the ants are continuously distributed, new pheromones are continuously released on the path, and the optimal allocation strategy is optimized and calculated. When the optimal matching of the cloud edge tasks is found, the load balance of the central cloud node is considered, so the pheromone updating formula on the path is specifically as follows:
Figure 94420DEST_PATH_IMAGE028
wherein the content of the first and second substances,
Figure 893356DEST_PATH_IMAGE030
the pheromone concentration of the computing task i transferred to the central cloud node j at the moment t +1 is represented;
Figure 756270DEST_PATH_IMAGE032
the pheromone concentration of the computing task i transferred to the central cloud node j at the moment t is represented;
Figure 167528DEST_PATH_IMAGE034
expressing the volatilization coefficient of pheromone, and taking the value as
Figure 901129DEST_PATH_IMAGE036
Figure 873895DEST_PATH_IMAGE038
Represents the total amount of pheromone released by all ants on path (i, j);
Figure 224105DEST_PATH_IMAGE040
representing the load condition of the corresponding central cloud node;
Figure 439055DEST_PATH_IMAGE042
represents the current optimal pheromone gain;
Figure 558320DEST_PATH_IMAGE044
and representing the weight of the individual contribution degree of the historical optimal elite.
In order to improve the optimal result of the traditional ant algorithm, research work provides an improvement strategy that the update weight of excellent individual pheromones is reduced in a nonlinear way and historical optimal elite individuals contribute the pheromones each generation.
The calculation formula of the total pheromone amount is specifically as follows:
Figure 699058DEST_PATH_IMAGE046
wherein n represents the number of ants; exp represents a non-linear decreasing function;
Figure 536564DEST_PATH_IMAGE048
representing a hyper-parameter; k represents the kth ant;
Figure 289626DEST_PATH_IMAGE050
indicating the increment at which the kth ant releases the pheromone on path (i, j).
The increment calculation formula of the pheromone is specifically as follows:
Figure 528977DEST_PATH_IMAGE052
wherein the content of the first and second substances,
Figure 577967DEST_PATH_IMAGE054
representing the total amount of the reserved pheromone on the path after one search is finished;
Figure 902769DEST_PATH_IMAGE056
representing the calculated task migration cost obtained by the matching scheme selected by the kth ant; 0 indicates that the kth ant has not traveled the path (i, j).
The weight calculation formula of the individual contribution degree of the historical optimal elite is specifically as follows:
Figure 459521DEST_PATH_IMAGE058
wherein the content of the first and second substances,
Figure 22221DEST_PATH_IMAGE060
representing the last operation time of the central cloud node j;
Figure 770340DEST_PATH_IMAGE062
is shown in allAverage running time of the heart cloud nodes. When the load of the cloud center node j is high,
Figure DEST_PATH_IMAGE079
at the same time of becoming smaller
Figure 238231DEST_PATH_IMAGE012
The probability of migration decreases as well.
In step S4, the migration process of the calculation task specifically includes:
s401: if the computing task needs to be migrated, the edge cloud sends a migration request of the computing task to the center cloud;
s402: after receiving the migration request, the center cloud obtains an optimal computation migration decision according to the load condition of the current center cloud node and by combining the network transmission time cost from the computation task to the center cloud, and sends the optimal computation migration decision to the edge cloud where the computation task i is located;
s403: after receiving an instruction that a computing task is allowed to migrate, the edge cloud uploads data to be processed to a specified central cloud node through a power special transmission network, and corresponding network bandwidth resources are allocated for data uploading;
s404: and the central cloud node allocates corresponding computing resources for processing the migrated computing tasks and stores the computing results into a database.
Now, taking a water disaster comprehensive emergency command platform of a river basin as an example for explanation, the model provided by the invention can generate an optimal calculation task migration strategy within 2S. As shown in fig. 2, after testing, the model still performs well when the number of tasks exceeds 200, and is more stable than optimization algorithms such as neural networks and reinforcement learning, and the overall time of the calculation tasks is reduced by 60%.
The working principle is as follows: in an end-edge-cloud architecture of a hydropower basin water disaster comprehensive emergency command platform, a dynamic cooperation model of edge cloud computing resources is introduced, queuing delay computing tasks on the edge cloud side of a drainage basin power station are dynamically migrated to a central cloud, the advantage of sufficient central cloud resources is fully played, and the overall computing efficiency of the platform is improved from a software level; according to the cloud computing resource efficiency optimization method, migration decision, bandwidth allocation and computing resource joint optimization allocation are carried out through an improved ant colony algorithm to achieve cloud computing resource efficiency optimization, through an improvement strategy that the excellent individual pheromone updating weight is decreased in a nonlinear mode and each generation of the historical optimal elite individual contributes pheromones, the load balance of a central cloud node is considered, the optimization matching effect is better, and the overall time of computing tasks is effectively reduced.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, it should be understood that the above embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A hydropower basin emergency command cloud-side computing resource cooperative processing method is characterized by comprising the following steps:
acquiring sensor monitoring data acquired by a power station end, and preprocessing the sensor monitoring data at a side cloud side to form at least one computing task;
acquiring the maximum allowable delay of the calculation task, and calculating to obtain the time for the calculation task to execute calculation locally; if the time for locally executing the calculation is greater than the maximum allowable delay, sending a migration request of a corresponding calculation task to the central cloud;
performing ant colony algorithm-based computing task queue optimization computing on all migration requests according to the current node resource load balancing condition of the central cloud and the network transmission time cost from the edge cloud node to the central cloud node to obtain an optimal allocation strategy;
and the corresponding computing task responds to the optimal allocation strategy and then carries out computing task migration so as to realize cloud-side computing resource cooperative disposal.
2. The method for cooperative processing of computing resources at the cloud side for emergency command of a hydropower domain according to claim 1, wherein the central cloud generates record information for a corresponding computing migration task after receiving a migration request;
the recorded information comprises final execution place information, network bandwidth resources distributed by the center cloud for corresponding computing tasks, CPU computing resources distributed by the center cloud for corresponding computing tasks, data sizes of the computing tasks and maximum allowable delay of the computing tasks;
and the central cloud node synchronizes and updates the record information of the computation migration tasks under the central cloud, and after making a computation migration decision each time, the record information is synchronously changed and broadcasted to all nodes of the central cloud, and the records of all the computation migration tasks are integrated into a task record set.
3. The method for cooperative processing of cloud-side computing resources for emergency command of hydropower domains as claimed in claim 1, wherein the optimal computing process of the optimal allocation strategy specifically comprises:
calculating according to network bandwidth resources distributed to the computing tasks by the central cloud nodes to obtain migration transmission delay of migration of the corresponding computing tasks to the corresponding central cloud nodes;
calculating according to the CPU resources distributed to the computing tasks by the central cloud nodes to obtain migration computing time of the corresponding computing tasks for executing computing on the corresponding central cloud nodes;
calculating to obtain the total migration delay of the migration of the corresponding calculation task according to the sum of the migration transmission delay and the migration calculation time;
and (4) considering the load balancing condition of each node of the central cloud, and performing optimized computation on the cloud-side computing resource efficiency by minimizing the execution completion time of all computing tasks to obtain the optimal allocation strategy of the cloud-side computing resources.
4. The method for cooperative processing of cloud-side computing resources for emergency commands of hydropower domains as claimed in claim 1, wherein the optimal calculation formula of the optimal allocation strategy is specifically as follows:
Figure DEST_PATH_IMAGE001
wherein the content of the first and second substances,
Figure 470547DEST_PATH_IMAGE002
the total migration delay representing task migration is minimum; n represents the number of ants; m represents the number of central cloud resources;
Figure DEST_PATH_IMAGE003
and representing the total migration delay of the computing task i to the central cloud node j.
5. The method for cooperative processing of cloud-side computing resources for emergency commands of hydropower domains as claimed in claim 1, wherein the optimal allocation strategy is optimized and calculated by resource matching according to migration probability, and the calculation formula of the migration probability is specifically as follows:
Figure 246742DEST_PATH_IMAGE004
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE005
representing the probability of migration of the computing task i to the central cloud node j;
Figure 649648DEST_PATH_IMAGE006
represents the pheromone concentration on pathway (i, j);
Figure DEST_PATH_IMAGE007
represents the pheromone concentration on pathway (i, q);
Figure 903912DEST_PATH_IMAGE008
representing the sensitivity of ants to pheromones;
Figure DEST_PATH_IMAGE009
representing the attraction level of the central cloud node j to ants on the computing task i, the lower the migration cost,the larger the value;
Figure 78803DEST_PATH_IMAGE010
representing the attraction level of the node q to ants on the calculation task i;
Figure DEST_PATH_IMAGE011
representing the sensitivity of the ant colony to pheromones;
Figure 291479DEST_PATH_IMAGE012
representing nodes that are not visited;
Figure DEST_PATH_IMAGE013
indicating the visited node.
6. The method for cooperative processing of cloud-side computing resources for emergency commands of hydropower domains as claimed in claim 5, wherein when the optimal allocation strategy is optimized, an pheromone updating formula on a path is specifically as follows:
Figure 181681DEST_PATH_IMAGE014
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE015
the pheromone concentration of the computing task i transferred to the central cloud node j at the moment t +1 is represented;
Figure 505215DEST_PATH_IMAGE016
the pheromone concentration of the computing task i transferred to the central cloud node j at the moment t is represented;
Figure DEST_PATH_IMAGE017
expressing the volatilization coefficient of pheromone, and taking the value as
Figure 800193DEST_PATH_IMAGE018
Figure DEST_PATH_IMAGE019
Represents the total amount of pheromone released by all ants on path (i, j);
Figure 918190DEST_PATH_IMAGE020
representing the load condition of the corresponding central cloud node;
Figure DEST_PATH_IMAGE021
represents the current optimal pheromone gain;
Figure 295688DEST_PATH_IMAGE022
and representing the weight of the individual contribution degree of the historical optimal elite.
7. The method for the cooperative processing of the cloud-side computing resources for the emergency commands of the hydropower domain according to claim 6, wherein the computing formula of the total amount of the pheromones is as follows:
Figure DEST_PATH_IMAGE023
wherein n represents the number of ants; exp represents a non-linear decreasing function;
Figure 157334DEST_PATH_IMAGE024
representing a hyper-parameter; k represents the kth ant;
Figure DEST_PATH_IMAGE025
indicating the increment at which the kth ant releases the pheromone on path (i, j).
8. The method for cooperative processing of cloud-side computing resources for emergency commands of hydropower domains as claimed in claim 7, wherein the incremental computing formula of the pheromone is as follows:
Figure 41239DEST_PATH_IMAGE026
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE027
representing the total amount of the reserved pheromone on the path after one search is finished;
Figure 595717DEST_PATH_IMAGE028
representing the calculated task migration cost obtained by the matching scheme selected by the kth ant; 0 indicates that the kth ant has not traveled the path (i, j).
9. The method for the cooperative processing of the cloud-side computing resources for the emergency commands of the hydropower domain according to claim 6, wherein a weight calculation formula of the individual contribution degree of the historical optimal elite is as follows:
Figure DEST_PATH_IMAGE029
wherein the content of the first and second substances,
Figure 460511DEST_PATH_IMAGE030
representing the last operation time of the central cloud node j;
Figure DEST_PATH_IMAGE031
representing the average run time of all the central cloud nodes.
10. The method for cooperative processing of the cloud-side computing resources for emergency command of the hydropower domain as claimed in any one of claims 1 to 9, wherein the migration process of the computing task is specifically as follows:
if the computing task needs to be migrated, the edge cloud sends a migration request of the computing task to the center cloud;
after receiving the migration request, the center cloud obtains an optimal computation migration decision according to the load condition of the current center cloud node and by combining the network transmission time cost from the computation task to the center cloud, and sends the optimal computation migration decision to the edge cloud where the computation task i is located;
after receiving an instruction that a computing task is allowed to migrate, the edge cloud uploads data to be processed to a specified central cloud node through a power special transmission network, and corresponding network bandwidth resources are allocated for data uploading;
and the central cloud node allocates corresponding computing resources for processing the migrated computing tasks and stores the computing results into a database.
CN202110894257.6A 2021-08-05 2021-08-05 Water and power basin emergency command cloud-side computing resource cooperative processing method Active CN113342510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110894257.6A CN113342510B (en) 2021-08-05 2021-08-05 Water and power basin emergency command cloud-side computing resource cooperative processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110894257.6A CN113342510B (en) 2021-08-05 2021-08-05 Water and power basin emergency command cloud-side computing resource cooperative processing method

Publications (2)

Publication Number Publication Date
CN113342510A true CN113342510A (en) 2021-09-03
CN113342510B CN113342510B (en) 2021-11-02

Family

ID=77480810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110894257.6A Active CN113342510B (en) 2021-08-05 2021-08-05 Water and power basin emergency command cloud-side computing resource cooperative processing method

Country Status (1)

Country Link
CN (1) CN113342510B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114936808A (en) * 2022-07-21 2022-08-23 之江实验室 Cloud-edge cooperative task management system and method for substation fault detection
CN115277789A (en) * 2022-08-26 2022-11-01 中国长江三峡集团有限公司 Safety protection system and method for cascade hydropower station
CN115587222A (en) * 2022-12-12 2023-01-10 阿里巴巴(中国)有限公司 Distributed graph calculation method, system and equipment
CN117170885A (en) * 2023-11-03 2023-12-05 国网山东综合能源服务有限公司 Distributed resource optimization allocation method and system based on cloud edge cooperation
WO2024095745A1 (en) * 2022-11-04 2024-05-10 ソニーグループ株式会社 Information processing device, and information processing method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102932422A (en) * 2012-09-29 2013-02-13 南京邮电大学 Cloud environment task scheduling method based on improved ant colony algorithm
CN103176850A (en) * 2013-04-10 2013-06-26 国家电网公司 Electric system network cluster task allocation method based on load balancing
CN104618406A (en) * 2013-11-05 2015-05-13 镇江华扬信息科技有限公司 Load balancing algorithm based on naive Bayesian classification
CN106936892A (en) * 2017-01-09 2017-07-07 北京邮电大学 A kind of self-organizing cloud multi-to-multi computation migration method and system
US20170286180A1 (en) * 2016-03-31 2017-10-05 International Business Machines Corporation Joint Network and Task Scheduling

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102932422A (en) * 2012-09-29 2013-02-13 南京邮电大学 Cloud environment task scheduling method based on improved ant colony algorithm
CN103176850A (en) * 2013-04-10 2013-06-26 国家电网公司 Electric system network cluster task allocation method based on load balancing
CN104618406A (en) * 2013-11-05 2015-05-13 镇江华扬信息科技有限公司 Load balancing algorithm based on naive Bayesian classification
US20170286180A1 (en) * 2016-03-31 2017-10-05 International Business Machines Corporation Joint Network and Task Scheduling
CN106936892A (en) * 2017-01-09 2017-07-07 北京邮电大学 A kind of self-organizing cloud multi-to-multi computation migration method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YOUNGJU MOON等: "A slave ants based ant colony optimization algorithm for task scheduling in cloud computing environments", 《HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES VOLUME》 *
王俊英 等: "基于概率自适应蚁群算法的云任务调度方法", 《郑州大学学报(工学版)》 *
王骞: "基于移动Agent的服务器集群研究", 《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑》 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114936808A (en) * 2022-07-21 2022-08-23 之江实验室 Cloud-edge cooperative task management system and method for substation fault detection
CN115277789A (en) * 2022-08-26 2022-11-01 中国长江三峡集团有限公司 Safety protection system and method for cascade hydropower station
CN115277789B (en) * 2022-08-26 2024-03-26 中国长江三峡集团有限公司 Safety protection system and method for cascade hydropower station
WO2024095745A1 (en) * 2022-11-04 2024-05-10 ソニーグループ株式会社 Information processing device, and information processing method
CN115587222A (en) * 2022-12-12 2023-01-10 阿里巴巴(中国)有限公司 Distributed graph calculation method, system and equipment
CN115587222B (en) * 2022-12-12 2023-03-17 阿里巴巴(中国)有限公司 Distributed graph calculation method, system and equipment
CN117170885A (en) * 2023-11-03 2023-12-05 国网山东综合能源服务有限公司 Distributed resource optimization allocation method and system based on cloud edge cooperation
CN117170885B (en) * 2023-11-03 2024-01-26 国网山东综合能源服务有限公司 Distributed resource optimization allocation method and system based on cloud edge cooperation

Also Published As

Publication number Publication date
CN113342510B (en) 2021-11-02

Similar Documents

Publication Publication Date Title
CN113342510B (en) Water and power basin emergency command cloud-side computing resource cooperative processing method
CN108829494B (en) Container cloud platform intelligent resource optimization method based on load prediction
CN105550323B (en) Load balance prediction method and prediction analyzer for distributed database
US8745434B2 (en) Platform for continuous mobile-cloud services
Das et al. Parallel processing of adaptive meshes with load balancing
TWI725744B (en) Method for establishing system resource prediction and resource management model through multi-layer correlations
CN104618153B (en) Dynamic fault-tolerant method and system based on P2P in the processing of distributed parallel figure
CN108170530B (en) Hadoop load balancing task scheduling method based on mixed element heuristic algorithm
CN104216782A (en) Dynamic resource management method for high-performance computing and cloud computing hybrid environment
CN110858973A (en) Method and device for predicting network traffic of cell
CN111309393A (en) Cloud edge-side collaborative application unloading algorithm
Filip et al. Data capsule: Representation of heterogeneous data in cloud-edge computing
CN115134371A (en) Scheduling method, system, equipment and medium containing edge network computing resources
Huang et al. Enabling dnn acceleration with data and model parallelization over ubiquitous end devices
CN109976873B (en) Scheduling scheme obtaining method and scheduling method of containerized distributed computing framework
CN111506431A (en) Method for optimizing perception load performance of cloud server under energy consumption constraint
Patil et al. Memory and Resource Management for Mobile Platform in High Performance Computation Using Deep Learning
CN107301094A (en) The dynamic self-adapting data model inquired about towards extensive dynamic transaction
Ali et al. Probabilistic normed load monitoring in large scale distributed systems using mobile agents
CN116431281A (en) Virtual machine migration method based on whale optimization algorithm
CN115913967A (en) Micro-service elastic scaling method based on resource demand prediction in cloud environment
Shi et al. Hierarchical adaptive collaborative learning: A distributed learning framework for customized cloud services in 6G mobile systems
Zhao et al. A dynamic dispatching method of resource based on particle swarm optimization for cloud computing environment
Das et al. MinEX: a latency-tolerant dynamic partitioner for grid computing applications
CN112988904A (en) Distributed data management system and data storage method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant