CN113282368A - Edge computing resource scheduling method for substation inspection - Google Patents
- Publication number
- CN113282368A (application number CN202110569247.5A)
- Authority
- CN
- China
- Prior art keywords
- resource
- edge
- scheduling
- resources
- cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The invention provides an edge computing resource scheduling method for substation inspection, comprising the following steps: step a), modeling the collaborative representation of edge computing and the cloud; step b), under the cloud/edge collaborative representation model, uniformly virtualizing the various edge-side resources and abstracting them as services; step c), establishing a mechanism for dynamic, automatic, on-demand allocation of cloud-edge resources based on the abstract services of the various edge-side resources; step d), when cloud-edge resource allocation exceeds demand, establishing computing-resource sharing among the substation inspection robots; and step e), establishing a real-time migration mechanism for cloud-edge resources so as to respond efficiently to cloud/edge resource-sharing requests. The invention designs mechanisms for resource representation, service abstraction, on-demand allocation, resource sharing and migration based on the mobile edge, and efficiently schedules the computing and storage resources of the edge nodes of an unmanned substation inspection system built on an edge computing architecture, forming an edge computing operating system suited to unmanned substation inspection scenarios.
Description
Technical Field
The invention relates to the technical fields of power informatization, edge computing and system optimization, and in particular to an edge computing resource scheduling method for substation inspection.
Background
In recent years, operation and inspection digitalization based on intelligent patrol systems has become a core technology in the daily operation and maintenance of substations. Intelligent patrol enables automatic, comprehensive assessment of all substation equipment, replaces much of the daily work of operation and maintenance personnel, alleviates the shortage of such personnel, and raises unattended substation operation to a new level. A typical intelligent inspection system combines track-mounted robots, miniaturized robots and fixed-point cameras; carries equipment such as thermal infrared imagers and visible-light cameras; and integrates non-contact monitoring, multi-sensor fusion and pattern-recognition technologies for electrical equipment to monitor running equipment in an all-round way.
In a traditional substation patrol system, the collected data are transmitted to a patrol host, which processes them centrally and then issues the corresponding controls and instructions. As sensor data increase, the transmission load borne by the data links grows large, and at the same time the host must carry out many data-intensive processing tasks, so feedback cannot be given in time. In substation applications, the equipment types are complex, the equipment-grade coverage is comprehensive, and the requirements for stable and safe operation are very high, so the traditional cloud-host mode faces more and more problems and cannot meet the new demands of future power-grid development.
In an unmanned intelligent substation inspection system, how to process front-end sensor data in a timely way is a key current problem. Because the number of sensors on the inspection robots and the volume of real-time data they can receive keep increasing, communication traffic in the network grows rapidly, as does the number of compute-intensive terminal applications. Edge computing and system optimization technologies are therefore adopted to move computing and storage capacity to the front end: the front-end sensing devices perform distributed data processing on site and upload only the processing results, which reduces the platform's centralized processing load, improves its response speed, and greatly improves network performance and application capacity.
However, since edge-side devices are resource-limited, resource consolidation, management and efficient processing pose new challenges. The invention provides an edge computing resource scheduling method for substation inspection, which mainly designs mechanisms for edge computing resource representation, service abstraction, on-demand allocation, resource sharing and migration based on the substation inspection robot, so as to minimize network traffic and delay overhead.
Disclosure of Invention
The invention aims to provide an edge computing resource scheduling method for substation patrol that efficiently schedules the computing and storage resources of the edge nodes of an unmanned substation patrol system based on an edge computing architecture, forming an edge computing operating system suited to unmanned substation patrol scenarios.
An edge computing resource scheduling method for substation inspection comprises the following steps:
step a), modeling the collaborative representation of edge computing and the cloud: cloud-edge collaborative relationships are classified into the following 4 types: a substation inspection robot cooperates with an edge server; substation inspection robots cooperate autonomously in P2P mode; a substation inspection robot directly requests resources from the inspection cloud-center service; and the substation inspection robots in a covered area coordinate across neighboring areas through interconnected edge servers to jointly complete a task tied to a real-time position. A cloud/edge collaborative representation model is constructed using a topological representation of the computing units;
step b), under the cloud/edge collaborative representation model, uniformly virtualizing the various edge-side resources and abstracting them as services, and providing a uniform interface to upper-layer applications by constructing a diversified resource management system, thereby shielding the heterogeneity of the actual devices and systems;
step c), establishing a mechanism for dynamic, automatic, on-demand allocation of cloud-edge resources based on the abstract services of the various edge-side resources;
step d), when cloud-edge resource allocation exceeds demand, establishing computing-resource sharing among the substation inspection robots to improve the utilization of network edge resources; specifically, the problem of optimal resource utilization is converted, via a modeled incentive mechanism, into the problem of maximizing incentive;
and step e), realizing a cloud/edge migration algorithm based on deep reinforcement learning: a reinforcement-learning scheduling method combining a deep neural network with Monte Carlo Tree Search (MCTS) is adopted to establish a real-time migration mechanism for cloud-edge resources, so as to respond efficiently to cloud/edge resource-sharing requests.
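The four cloud-edge collaboration types of step a) can be sketched as a small enumeration with a toy dispatch rule. The names, parameters and thresholds below are illustrative assumptions, not part of the patented method:

```python
from enum import Enum, auto

class CloudEdgeCollab(Enum):
    """Illustrative labels for the 4 collaboration types of step a)."""
    ROBOT_EDGE_SERVER = auto()   # robot cooperates with an edge server
    ROBOT_P2P = auto()           # robots cooperate autonomously in P2P mode
    ROBOT_CLOUD_DIRECT = auto()  # robot requests inspection cloud-center resources directly
    EDGE_SERVER_MESH = auto()    # interconnected edge servers coordinate robots in an area

def pick_collaboration(latency_budget_ms: float, needs_area_context: bool,
                       needs_cloud_scale: bool) -> CloudEdgeCollab:
    """Toy routing rule (hypothetical thresholds) choosing a collaboration mode."""
    if needs_cloud_scale:                  # large-scale computation goes to the cloud center
        return CloudEdgeCollab.ROBOT_CLOUD_DIRECT
    if needs_area_context:                 # task tied to a real-time position in a covered area
        return CloudEdgeCollab.EDGE_SERVER_MESH
    if latency_budget_ms < 50:             # tight deadline: nearest edge server
        return CloudEdgeCollab.ROBOT_EDGE_SERVER
    return CloudEdgeCollab.ROBOT_P2P       # otherwise robots cooperate peer to peer
```

Such a rule is only a stand-in; the patent derives the collaboration structure from the weighted topological representation built in step a).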
Further, the step a) comprises the following specific steps:
step a1), the topological representation takes the computing units of the inspection robots as vertices and the coupling degree computed from the inclusion relationships between unit pairs as edge weights, depicting the cooperative relationships in the form of a weighted graph that simply and intuitively represents the associations between computing units and their strengths;
step a2), the inspection robot computing unit ecu is modeled by public attributes and trust attributes:
ecu=(cp,tr) (1)
wherein cp represents the common attributes of the ecu, the attribute description that every computing element possesses;
step a3), calculating the trust degree tr of the unit ecu;
the trust degree tr is modeled by an identity, behavior and capability 3 part, and is represented as tr ═ t, bt, ct, it is the identity trust degree possessed by the ecus, indicates the identity validity determined by the identity reliability guarantee mechanism of the ecus about identity authentication, authorization and authorization delegation, and takes a binary value (0, 1), bt is the behavior trust degree, indicates the organization convention to be followed by the ecus in different complexes in the edge computing environment of the inspection robot, and includes the specific behavior constraint of the edge computing environment of the inspection robot on the edge computing unit and the interaction specification to be followed in the autonomous coordination process of the edge computing elements.
Further, the step b) comprises the following specific steps:
step b1), virtualizing the mapping of the substation inspection robots' computing nodes, the mapping of the inspection communication links, and the coordinated mapping between inspection nodes and virtual links;
step b2), performing abstract aggregation on resources in the physical layer at the bottom layer of the patrol node to form a virtual resource layer;
step b3), performing service encapsulation on the resources of the substation patrol system: the attributes and functions of a resource are mapped into the service resource pool of the cloud-edge architecture, with the service acting as an abstract representation of those attributes and functions, and the association between resources and services is established through three modes: top-layer decomposition, bottom-layer aggregation and intermediate divergence.
Further, establishing the association between resources and services in step b3) through the three modes of top-layer decomposition, bottom-layer aggregation and intermediate divergence proceeds as follows:
Top-layer decomposition: starting from a task, decompose it into subtasks according to sequence, parallel, loop and selection processes, step by step, until each subtask is a minimal unit with task meaning, and then associate each subtask with a service so that resources are virtually packaged into services that satisfy the subtasks;
Bottom-layer aggregation: virtualize and package the large number of bottom-layer substation inspection system resources into various services from the bottom up, and gradually abstract the services that satisfy the task requirements by clustering abstraction or other virtualization operations;
Intermediate divergence: diverge from the middle toward both the bottom layer and the top layer, combining the top-layer decomposition and bottom-layer aggregation modes; this takes into account both the task requirements and the actual capacity of the resources, and directly relates the services packaged from virtual resources to the tasks.
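The top-layer decomposition mode can be illustrated by flattening a hypothetical task tree, built from the sequence/parallel/loop/select combinators named above, into its minimal subtasks, each of which would then be associated with a service. The task names are invented for the example:

```python
def leaf_subtasks(task):
    """Top-layer decomposition: a task is either a string (a minimal unit with
    task meaning) or a (combinator, children) pair with combinator in
    {"seq", "par", "loop", "select"}. Returns the minimal subtasks in order."""
    if isinstance(task, str):
        return [task]
    _combinator, children = task
    out = []
    for child in children:
        out.extend(leaf_subtasks(child))
    return out

# Hypothetical patrol task: navigate, capture two images in parallel, then analyze.
inspection = ("seq", ["navigate",
                      ("par", ["capture_visible", "capture_infrared"]),
                      "analyze"])
```

Here `leaf_subtasks(inspection)` yields the four minimal subtasks; in the patent's scheme each of these is then matched to a virtualized service.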
Further, the step c) comprises the following specific steps:
step c1), the resource request processor of the substation patrol system receives and responds to user demands and distributes them to each resource allocation control system, which finds, matches and returns the required resources within its control domain; the resource allocation control system then submits the required resources to the resource request processor, which finally delivers them to the user;
step c2), the resource allocation control system of the substation patrol system mainly comprises a monitoring module, a prediction module and a resource allocation module. The monitoring module is mainly responsible for monitoring the running state of computation and collecting resource-usage data; the prediction module uses the data collected by the monitoring module to predict the computing-resource load in the next time period; and the resource allocation module comprehensively analyzes the current resource load value obtained from the monitoring module and the next-period load value obtained from the prediction module. Using the current and predicted computing-resource demand, the resource allocation module adopts a resource allocation strategy based on hybrid elastic control, implementing an adaptive elastic resource allocation method that combines active control with passive reaction and realizes effective utilization of resources;
step c3), predicting the resource load of the substation patrol system to obtain a load prediction value;
step c4), carrying out adaptive elastic configuration of substation patrol system resources based on demand prediction: after receiving the current load information provided by the monitoring module and the next-period resource-demand change information provided by the prediction module, the resource allocation module integrates the two to perform adaptive elastic resource allocation based on the combination of active control and passive reaction.
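The hybrid elastic allocation of steps c2) to c4) can be sketched as a predictor plus an allocator that provisions for the larger of the current (reactive) and predicted (proactive) demand. The exponential-smoothing predictor and the headroom factor are stand-in assumptions, since the patent does not fix a concrete prediction model here:

```python
def predict_load(history, alpha=0.5):
    """Stand-in prediction module: exponential smoothing over past load samples."""
    forecast = history[0]
    for sample in history[1:]:
        forecast = alpha * sample + (1 - alpha) * forecast
    return forecast

def allocate(current_load, predicted_load, capacity, headroom=0.2):
    """Stand-in resource allocation module: hybrid elastic control covering the
    larger of the reactive (current) and proactive (predicted) demand, with a
    safety headroom, capped at the node's capacity."""
    demand = max(current_load, predicted_load)
    return min(capacity, demand * (1 + headroom))
```

For example, with a current load of 10 units and a predicted load of 20, the allocator provisions 24 units (20 plus 20% headroom) unless the node's capacity is lower.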
Further, the step d) comprises the following specific steps:
step d1), establishing a utility function and quantitatively analyzing the performance of each edge node;
step d2), establishing a fair-distribution incentive strategy: the substation inspection robots obtain incentives by sharing their remaining resources. Let r = (r_1, ..., r_v) be the incentive allocation vector, where each element r_k is the share that the current node k obtains of the total incentive value of the whole edge cloud l; if the incentives of all edge nodes sum to the maximum incentive value of the edge cloud, r is called an efficient incentive allocation;
step d3), constructing a distributed edge cloud based on a coalition game: a potentially blocking coalition l is found through the edge cloud manager (FCM); each edge node k in the potentially blocking coalition l stays in the same coalition with probability 1 - ρ and selects another coalition with probability ρ.
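Steps d2) and d3) can be sketched as an efficiency check on the incentive allocation vector r and one round of the probabilistic coalition-switching rule (stay with probability 1 - ρ, switch with probability ρ). The function names and coalition labels are assumptions:

```python
import random

def is_efficient(r, coalition_max_incentive, tol=1e-9):
    """r = (r_1, ..., r_v) is an efficient incentive allocation when the
    incentives of all edge nodes sum to the coalition's maximum incentive."""
    return abs(sum(r) - coalition_max_incentive) <= tol

def coalition_step(current, others, rho, rng):
    """One move for an edge node in a potentially blocking coalition: stay in the
    same coalition with probability 1 - rho, otherwise pick another coalition."""
    if others and rng.random() < rho:
        return rng.choice(others)
    return current
```

Repeating `coalition_step` for every node in the blocking coalition gives one iteration of the distributed coalition-game procedure.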
Further, the step e) comprises the following specific steps:
step e1), constructing an MCTS-based policy-gradient reinforcement learning method;
step e2), designing a scheduling policy network based on the seq2seq model;
step e3), layering the DAG graph to reduce the length of the to-be-scheduled task sequence fed into the policy network at each step;
and step e4), implementing the input and output of the scheduling algorithm of the substation patrol system.
Further, step e1) is specifically:
assuming that the stochastic scheduling policy (stochastic policy) is denoted as π (S | G; θ), the scheduling policy network model is denoted as fθ(G) The probability vector of the scheduling policy network model for predicting the output scheduling action is represented as a, i.e. a ═ fθ(G) Using the current fθThe output prediction scheduling action, MCTS method searches a plurality of task scheduling sequence samples to obtain new strategy probability pi (S | G; theta), then, based on the random gradient descending method, the scheduling strategy network model parameter theta is updated, so that the updated fθ(G) The output of (a) is closer to the probability of the new scheduling strategy pi (S | G; theta) obtained by the MCTS method, i.e. the following loss function is optimized, wherein c denotes that the L2 regularization parameter prevents overfitting:
l = -π^T log a + c||θ||^2 (1).
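A minimal sketch of the loss l = -π^T log a + c||θ||^2 using plain Python lists; it assumes a is a strictly positive probability vector of the same length as π:

```python
import math

def mcts_policy_loss(pi, a, theta, c=1e-4):
    """l = -pi^T log a + c * ||theta||^2: cross-entropy between the MCTS-improved
    policy pi and the network's predicted action probabilities a = f_theta(G),
    plus L2 regularization on the network parameters theta."""
    cross_entropy = -sum(p * math.log(q) for p, q in zip(pi, a))
    l2_penalty = c * sum(t * t for t in theta)
    return cross_entropy + l2_penalty
```

Minimizing this loss by gradient descent pulls the network output a toward the MCTS policy π, which is the update described in step e1).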
further, step e2) is specifically:
the scheduling strategy is predicted by the strategy network according to input information, the input information comprises hardware resource states in the system and information of tasks to be scheduled, the output scheduling strategy is predicted, namely the mapping relation of task scheduling to computing nodes, the scheduling strategy network predicts the scheduling strategy of the DAG tasks based on the information provided by the hardware resource topological graph and the DAG graph, the scheduling strategy is executed in the actual heterogeneous computing system through the scheduling strategy, reward feedback of operation completion time is obtained, and then parameters of the strategy network are updated by a strategy gradient reinforcement learning method based on MCTS, so that the task scheduling expectation reward output by the next scheduling strategy network prediction is improved.
Further, step e4) is specifically:
the input and output of the strategy network comprise an encoder RNN and a decoder RNN, the input information of the encoder RN comprises a hardware resource topological state sequence and a DAG task sequence to be scheduled, the connection information is used as the input of a network model in an embedding mode, the embedding of each heterogeneous computing node comprises resource state information such as computing capacity type, memory capacity, memory and network bandwidth size and the like, and topological relation information of adjacent nodes, and the embedding of each task comprises task type, data transmission size and information of adjacent tasks; the decoder is a long-time memory unit based on an attention mechanism, the output sequence length is equal to the length of a DAG task sequence to be scheduled, the decoder outputs the mapping relation of the current task scheduling to the computing node each time, and the output scheduling mapping relation is combined with the embedding of the scheduling node and serves as input information of the next decoding.
Aiming at the real-time collaboration, large-scale computation and strict real-time requirements of substation inspection service applications, the invention provides an edge computing resource scheduling method for substation inspection: the cloud-center resources of the substation inspection system and the various inspection edge nodes are placed under unified virtual collaborative management, while the large-scale computing advantage of the cloud center and the real-time advantage of the edge nodes are each brought into play to meet, respectively, the large-scale computation and real-time requirements of substation safety assurance. A large-scale substation patrol safety model is built in the cloud center and distributed to each edge end to guide safety patrols in real time, achieving efficient cloud-edge cooperation and improving the safety assurance of the substation.
Drawings
FIG. 1 is a schematic diagram of virtual resources of a substation patrol system based on edge computing according to the present invention;
FIG. 2 shows the adaptive elastic resource configuration control system of the substation patrol system of the present invention;
FIG. 3 is a diagram of the scheduling policy network structure based on the seq2seq model according to the present invention;
FIG. 4 is a diagram of the schedulable task sequence obtained by layering the DAG graph according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
To implement the above technical solution, the invention needs to solve the following technical problems: how to uniformly represent the cloud-edge collaborative resources of the substation inspection system; how to perform service abstraction over resources of many types, with large differences, inconsistent granularity and strong heterogeneity; how to optimally configure cloud-edge resources so as to automatically satisfy the dynamic diversity of resource demands and the elasticity problem in the resource configuration process; how to optimally share edge computing resources; and how to migrate cloud-edge resources in real time.
Aiming at the many resource types, large differences, inconsistent resource granularity and strong heterogeneity of the substation patrol system under the novel system architecture, the invention designs a virtualization technology based on multi-dimensional mapping and a multi-angle fine-grained service abstraction technology to rapidly deploy the corresponding entity resources and cooperative architecture. Aiming at the optimal allocation of cloud-edge resources, the dynamic diversity of resource demands and the elasticity problem in the resource allocation process, a dynamic, on-demand, automatic resource allocation technology with adaptive elasticity is designed. Aiming at the sharing of edge computing resources, a resource sharing mechanism based on a distributed coalition game is designed, defining resource sharing as an optimization problem solved cooperatively in a distributed manner. Aiming at the need to migrate cloud-edge resources in real time, a scheduling algorithm combining deep learning and reinforcement learning is designed. On this basis, an edge operating system suited to intelligent substation inspection is further established to host the high-precision, low-delay upper-layer applications of unmanned substation inspection.
The embodiment of the invention provides an edge computing resource scheduling method for substation inspection, which comprises the following steps:
step a), modeling the collaborative representation of edge computing and the cloud. Cloud-edge collaborative relationships are classified into the following 4 types: a substation inspection robot cooperates with an edge server; substation inspection robots cooperate autonomously in P2P mode; a substation inspection robot directly requests resources from the inspection cloud-center service; and the substation inspection robots in a covered area coordinate across neighboring areas through interconnected edge servers to jointly complete a task tied to a real-time position. A cloud/edge collaborative representation model is constructed using a topological representation of the computing units.
Wherein, the step a) comprises the following specific steps:
Step a1), the topological representation takes the edge computing units of the inspection robots as vertices and the inclusion relationship between unit pairs as edge weights; the coupling degree between units is computed, and the cooperative relationships are depicted as a weighted graph, giving a simple and intuitive representation of the associations between computing units and their strengths.
Step a2), the inspection robot computing unit ecu is modeled by public attributes and trust attributes:
ecu=(cp,tr) (1)
wherein cp represents the common attributes of the ecu, i.e. the attribute description that every computing element has, such as the ID, name and description of the ecu, and the function and category of the ecu;
Step a3), the trust degree tr represents the trust attributes of the ecu, expressed during information interaction when the inspection robot edge computing complex and the edge computing executive body are constructed.
The trust degree tr is modeled in 3 parts, identity, behavior and capability, i.e. it can be expressed as tr = (it, bt, ct). Here it is the identity trust possessed by the ecu, referring to the identity validity determined by the identity reliability guarantee mechanisms of the ecu, such as identity authentication, authorization and authorization delegation; its value is usually binary (0 or 1). bt is the behavior trust, referring to the organizational conventions the ecu must follow within different complexes in the inspection robot edge computing environment; it includes both the specific behavioral constraints the environment imposes on the edge computing unit and the interaction specifications to be followed during the autonomous cooperation of edge computing elements (for example, shared resources may not be withdrawn or reduced without consent while being shared), as well as social-behavior attributes such as mobility and duration. ct is the capability trust of the ecu.
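The ecu model of equation (1) and its trust triple tr = (it, bt, ct) can be sketched as a small data structure. A minimal illustrative sketch; all field names and example values are assumptions, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class CommonAttributes:            # cp: common attributes of the ecu
    ecu_id: str
    name: str
    function: str
    category: str

@dataclass
class Trust:                       # tr = (it, bt, ct)
    it: int      # identity trust, binary {0, 1}
    bt: float    # behavior trust, here scaled to [0, 1]
    ct: float    # capability trust, here scaled to [0, 1]

@dataclass
class Ecu:                         # ecu = (cp, tr), equation (1)
    cp: CommonAttributes
    tr: Trust

robot = Ecu(CommonAttributes("ecu-01", "patrol-robot-1",
                             "visual inspection", "robot"),
            Trust(it=1, bt=0.9, ct=0.8))
```

The binary identity trust `it` mirrors the (0, 1) values described above; the numeric scales for `bt` and `ct` are illustrative choices.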
Step b), under the cloud/edge collaborative representation model, unified virtualization service abstraction is carried out on various resources at the edge end of the transformer substation inspection robot and the like. And constructing a basis of a diversified resource management system, providing a uniform interface for upper-layer application, and shielding the heterogeneity of actual equipment and the system.
Wherein, the step b) comprises the following specific steps:
Step b1), virtualizing the computing node mapping of the substation inspection robots, the inspection communication link mapping, and the coordinated mapping of inspection nodes and virtual links, as shown in fig. 1. Specifically, the first mapping to be performed is virtual computing resource mapping, i.e. mapping each virtual node to a physical inspection machine; the second is virtual link mapping, i.e. according to the established mapping between virtual nodes and physical inspection nodes, extending the virtual links between cooperatively interacting inspection nodes onto the physical inspection network until the mapping of all cooperative-interaction virtual links is complete; the third is the coordinated mapping of virtual nodes and virtual links, in which the operating conditions are analyzed and predicted so that virtual nodes and virtual links are mapped in a unified, coordinated way, finally achieving the goal of system-level mapping.
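The first two mapping stages above can be illustrated with a toy sketch: greedy first-fit node placement followed by BFS routing of a virtual link onto a physical path. The heuristics, names and capacities below are assumptions for illustration, not the patented mapping method:

```python
from collections import deque

def map_nodes(virtual_demand, physical_cap):
    """Stage 1 sketch: assign each virtual node to a physical
    inspection node with enough remaining capacity (first fit)."""
    cap = dict(physical_cap)
    placement = {}
    for vn, need in virtual_demand.items():
        for pn, free in cap.items():
            if free >= need:
                placement[vn] = pn
                cap[pn] -= need
                break
        else:
            raise RuntimeError(f"no capacity for {vn}")
    return placement

def map_link(adj, src, dst):
    """Stage 2 sketch: route one virtual link onto a shortest
    physical path found by BFS."""
    prev, seen = {}, {src}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                prev[v] = u
                q.append(v)
    return None

placement = map_nodes({"v1": 2, "v2": 3}, {"p1": 4, "p2": 3})
path = map_link({"p1": ["p2"], "p2": ["p1", "p3"], "p3": ["p2"]},
                placement["v1"], "p3")
```

A real implementation would coordinate both stages (stage 3) against predicted operating conditions rather than mapping them independently.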
Step b2), performing abstract aggregation on the resources in the physical layer at the bottom of the inspection nodes to form a virtual resource layer. Resource integration technology is used to abstract and normalize the remaining resources, improving general applicability and unifying heterogeneous hardware resources, so that when the system is constructed and mapped directly on the virtual resource layer, only the constraint conditions need to be satisfied and the underlying physical topology need not be considered.
Step b3), performing service encapsulation on the resources of the substation inspection system. The attributes and functions of the resources are mapped into the service resource pool of the cloud-edge architecture, with a service acting as the abstract representation of a resource's attributes and functions. The association between resources and services is established in three ways: top-layer decomposition, bottom-layer aggregation and intermediate divergence, which are described as follows:
Top-layer decomposition mode: starting from a task, the task flow is decomposed into a number of subtasks according to sequence, parallelism, loops, selection and so on, step by step, until each subtask is a minimal unit with task meaning; each subtask is then associated with a service, so that resources are virtually encapsulated into services satisfying the subtasks.
Bottom-layer aggregation mode: a large number of underlying substation inspection system resources are virtualized and encapsulated into various services from the bottom up, and services meeting task requirements are abstracted step by step using cluster abstraction or other virtualization operations. Its advantage is that any abstracted service can be completed by concrete resources; its disadvantages are that, the underlying resources being limited, services meeting task requirements may fail to be abstracted, and that when a new task requirement appears the underlying resources cannot be agilely and virtually encapsulated into the new services required.
Intermediate divergence mode: diverging from the middle toward both the bottom layer and the top layer, it integrates the ideas of the two modes above, can take into account both task requirements and actual resource capacity, makes the services encapsulated from virtualized resources directly related to tasks, and is suitable for dynamic open environments.
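The top-layer decomposition mode can be sketched as a recursive walk over a task tree whose leaves are bound to services. The task tree, service registry and all names below are hypothetical illustrations:

```python
# task -> sub-tasks; tasks absent from the dict are minimal sub-tasks
task_tree = {
    "inspect_bay": ["capture_image", "analyse"],
    "analyse": ["detect_defect", "report"],
}

# hypothetical service registry mapping minimal sub-tasks to services
service_registry = {
    "capture_image": "camera-service",
    "detect_defect": "cv-service",
    "report": "messaging-service",
}

def decompose(task):
    """Depth-first decomposition down to minimal sub-tasks."""
    children = task_tree.get(task)
    if not children:
        return [task]
    leaves = []
    for c in children:
        leaves.extend(decompose(c))
    return leaves

def bind_services(task):
    """Associate each minimal sub-task with a service, so resources
    are virtually encapsulated into services satisfying the sub-tasks."""
    return {leaf: service_registry[leaf] for leaf in decompose(task)}

bindings = bind_services("inspect_bay")
```

The bottom-layer aggregation mode would run in the opposite direction, clustering concrete resources into services before any task is known.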
Step c), based on the abstract services of the various edge-end resources, establishing a dynamic on-demand automatic allocation mechanism for cloud-edge resources. Resource demand prediction is combined with a hybrid resource supply strategy to solve the optimal configuration of cloud-edge resources and the elasticity problem during resource configuration, so that the resource configuration control system has adaptive elasticity and provides reliable, flexible and efficient resource allocation services.
Wherein, the step c) comprises the following specific steps:
Step c1), the resource request processor of the substation inspection system receives and responds to user requirements, distributing each requirement to the resource allocation control systems; the required resources are found, matched and returned within each system's control domain, then submitted by the resource allocation control system to the resource request processor, and finally delivered to the user. The resource allocation control system with adaptive elasticity capability is the core of reasonable resource allocation; its composition is shown in fig. 2.
Step c2), the resource allocation control system of the substation inspection system mainly comprises a monitoring module, a prediction module and a resource allocation module. The monitoring module is mainly responsible for monitoring the computing running state and collecting resource usage data; the prediction module uses the data collected by the monitoring module to predict the computing resource load in the next time period; the resource allocation module comprehensively analyzes the current resource load value obtained from the monitoring module and the next-period load value obtained from the prediction module, and, using the current and predicted computing resource demand, adopts a resource allocation strategy based on hybrid elastic control, implementing an adaptive elastic resource allocation method that combines active control with passive reaction to achieve effective resource utilization.
Step c3), predicting the resource load of the substation inspection system. An unsupervised clustering algorithm for resource load based on a Hidden Markov Model (HMM) automatically obtains the optimal number of clusters, and the Bayesian Information Criterion (BIC) and Akaike Information Criterion (AIC) are used to automatically determine the optimal HMM for a given data set and the corresponding optimal clustering. On this basis, the cluster most similar to the current cloud computing load data is found by matching, the historical data in that cluster are fed into an Elman neural network optimized by a genetic algorithm for training, and a load prediction value is finally obtained.
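As a rough numeric illustration of the model-selection idea in this step, the sketch below chooses the number of load clusters by comparing BIC (and AIC) across candidate models. A tiny hand-rolled 1-D Gaussian-mixture EM and synthetic data stand in for the HMM-based clustering and real load traces, which are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic two-regime load trace (stand-in for real monitoring data)
load = np.concatenate([rng.normal(20, 2, 200), rng.normal(60, 3, 200)])

def fit_gmm(x, k, iters=50):
    """Very small EM for a 1-D Gaussian mixture; returns the
    log-likelihood and the free-parameter count."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    sigma = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        dens = (w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
                / (sigma * np.sqrt(2 * np.pi)))
        resp = dens / dens.sum(axis=1, keepdims=True)   # E-step
        nk = resp.sum(axis=0)                           # M-step
        w, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    ll = np.log(dens.sum(axis=1)).sum()   # log-likelihood at last E-step
    return ll, 3 * k - 1                  # k means, k sigmas, k-1 weights

def bic(ll, p, n): return p * np.log(n) - 2 * ll
def aic(ll, p):    return 2 * p - 2 * ll

fits = {k: fit_gmm(load, k) for k in (1, 2, 3)}
bics = {k: bic(ll, p, len(load)) for k, (ll, p) in fits.items()}
aics = {k: aic(ll, p) for k, (ll, p) in fits.items()}
best_k = min(bics, key=bics.get)
```

The patent's method additionally trains a genetic-algorithm-optimized Elman network on the matched cluster's history; that stage is omitted here.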
Step c4), performing adaptive elastic configuration of substation inspection system resources based on demand prediction. After receiving the current load information provided by the monitoring module and the next-period resource demand change information provided by the prediction module, the resource allocation module integrates the two to perform adaptive elastic resource allocation combining active control and passive reaction.
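The combination of proactive (prediction-driven) and reactive (threshold-driven) scaling described in steps c2) and c4) can be sketched in a few lines. The node capacity and utilisation thresholds are illustrative assumptions, not values from the disclosure:

```python
NODE_CAPACITY = 100          # load units one compute node can serve (assumed)
HIGH, LOW = 0.85, 0.30       # reactive utilisation thresholds (assumed)

def plan_nodes(current_load, predicted_load, active_nodes):
    """Hybrid elastic policy: provision for the predicted demand
    (active control), then let current utilisation override the plan
    when it is already out of band (passive reaction)."""
    target = max(1, -(-int(predicted_load) // NODE_CAPACITY))  # ceil division
    util = current_load / (active_nodes * NODE_CAPACITY)
    if util > HIGH:                       # reactive scale-out
        target = max(target, active_nodes + 1)
    elif util < LOW:                      # reactive scale-in, one node at a time
        target = min(target, max(1, active_nodes - 1))
    return target

# predicted spike: scale out ahead of demand
scale_out = plan_nodes(current_load=150, predicted_load=290, active_nodes=2)
# predicted lull plus low current utilisation: scale in
scale_in = plan_nodes(current_load=40, predicted_load=90, active_nodes=3)
```

A production controller would also smooth predictions and rate-limit scaling actions; this sketch only shows how the two signals are merged.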
Step d), when the cloud-edge resource allocation exceeds demand, establishing computing resource sharing among the substation inspection robots to improve the utilization of network edge resources. An incentive mechanism is modeled, converting the optimal resource utilization problem into the problem of obtaining the highest incentive.
Wherein, the step d) comprises the following specific steps:
Step d1), establishing a utility function and quantitatively analyzing the performance of each edge inspection robot. The utility function mainly measures the benefits of cooperation through shared resources against the operating overhead. The substation inspection system decides whether to offload a task to an edge node according to processing delay and energy overhead. If the task is offloaded to an edge node, the edge cloud manager decides whether to perform the computation locally or send it to the remote cloud center, according to the requirements of the user and the task, its own computing power and workload, the available incentives, and so on.
Step d2), establishing a fair-distribution incentive strategy. The substation inspection robots obtain incentives by sharing their remaining resources. Let r = (r1, …, rv) be the incentive allocation vector, where each element rk is the share of the total incentive of the whole edge cloud l available to the current node k. If the sum of the incentives of all edge nodes equals the maximum incentive of the edge cloud, r is called an efficient incentive allocation, which is achievable under the most ideal edge cloud structure. Furthermore, if for every possible edge cloud l' the corresponding allocation r' satisfies r' ≤ r, the incentive allocation vector r is said to be non-obstructive, i.e. the edge cloud l is a blocking coalition that precludes the formation of other coalitions. The current incentive allocation vector is fair if no node can obtain a higher incentive by sharing its resources with another edge cloud. An optimization problem can therefore be constructed from these conditions; its solution corresponds to an optimal edge cloud architecture that both maximizes the utilization of all edge node resources and distributes incentives fairly.
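The efficiency and non-blocking conditions above can be checked mechanically. A minimal sketch; the allocation values and node names are hypothetical:

```python
def is_efficient(r, total_incentive):
    """Efficient allocation: the node shares exhaust the edge cloud's
    total incentive."""
    return abs(sum(r.values()) - total_incentive) < 1e-9

def is_non_blocking(r, alternatives):
    """Non-obstructive allocation: no alternative edge cloud l' offers
    any node a strictly higher share than it gets under r.
    `alternatives` is a list of allocations r' from candidate clouds."""
    return all(r_alt.get(k, 0.0) <= v
               for r_alt in alternatives
               for k, v in r.items())

# hypothetical allocation over three nodes of edge cloud l
r = {"node1": 0.5, "node2": 0.3, "node3": 0.2}
efficient = is_efficient(r, 1.0)
fair = is_non_blocking(r, [{"node1": 0.4, "node2": 0.3}])
```

The optimization problem in the text then amounts to searching over coalition structures for an allocation passing both checks.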
Step d3), constructing a distributed edge cloud based on the coalition game. The edge cloud construction problem is analogous to a coalition formation problem, and a distributed edge cloud construction algorithm is derived from the dynamic coalition formation algorithm. Specifically, the edge cloud manager FCM identifies a potential blocking coalition l; each edge node k within it stays in the same coalition with probability 1 - ρ and selects another coalition with probability ρ. This decentralized scheme effectively ensures that the owner of an edge node retains autonomous decision rights over whether to cooperate, and better captures the dynamic communication process of the nodes, so that resources are contributed to the edge cloud offering the highest incentive. The core idea of the algorithm is to define a finite-state Markov chain and analyze the reversion process of each node. Standard Markov chain results show that after sufficiently many iterations the reversion process converges to a set of recurrent states; which recurrent state is finally reached depends on the initial state. The final state reached by the reversion process does not necessarily solve the edge cloud construction optimization problem, so perturbation is introduced: nodes hoping to obtain a higher incentive are allowed, with small probability, to deviate from the optimal strategy and select a suboptimal one. When joining another edge cloud, a node is required to obtain the expected remaining incentive of the edge cloud resulting from its joining.
In the specific algorithm implementation, the aspects of decentralization, complexity, sampling time, characteristic function parameters and the like are also considered so as to fit the actual use condition and measure the potential operation overhead.
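The stay-with-probability-1-ρ / explore-with-probability-ρ rule of step d3) can be simulated in a few lines. The incentive table, node names and ρ value below are hypothetical, and the exploration rule (move to the highest-incentive cloud) is a simplified stand-in for the coalition-game dynamics:

```python
import random

random.seed(7)
RHO = 0.2   # exploration probability (assumed)

# hypothetical per-node incentive offered by each candidate edge cloud
incentive = {"cloudA": {"k1": 0.6, "k2": 0.2},
             "cloudB": {"k1": 0.3, "k2": 0.5}}

def step(membership):
    """One round of the decentralised coalition dynamics: each node
    stays put with probability 1 - RHO, otherwise moves to the cloud
    offering it the highest incentive."""
    new = {}
    for node, cloud in membership.items():
        if random.random() < 1 - RHO:
            new[node] = cloud                                   # stay
        else:                                                   # explore
            new[node] = max(incentive,
                            key=lambda c: incentive[c].get(node, 0.0))
    return new

m = {"k1": "cloudB", "k2": "cloudA"}   # deliberately suboptimal start
for _ in range(50):
    m = step(m)
```

Over enough rounds each node almost surely explores at least once and drifts toward its best-paying cloud, mirroring the convergence of the reversion process described above.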
And e), realizing a cloud/edge migration algorithm based on deep reinforcement learning, and establishing a real-time migration mechanism of cloud edge resources by adopting a reinforcement learning scheduling method combined with a deep neural network and a Monte Carlo Tree Search (MCTS) method so as to efficiently respond to a cloud/edge resource sharing request.
Wherein, the step e) comprises the following specific steps:
Step e1), constructing a policy gradient reinforcement learning method based on MCTS. The policy gradient reinforcement learning method uses a policy network model to predict the scheduling actions mapping tasks to computing nodes, so as to reduce the expected job completion time of DAG tasks. Combining reinforcement learning with Monte Carlo Tree Search (MCTS) improves policy search efficiency and enhances the quality of the scheduling policies predicted by the policy network model. Assume the stochastic scheduling policy is denoted π(S|G; θ) and the scheduling policy network model is denoted fθ(G); the probability vector over scheduling actions predicted by the network model is a, i.e. a = fθ(G). Using the predicted scheduling actions output by the current fθ, the MCTS method searches multiple task scheduling sequence samples to obtain a new policy probability π(S|G; θ). Compared with the policy output by the original fθ, the new policy yields higher-quality scheduling results (shorter job completion time), so the MCTS method can improve the prediction quality of the policy network more quickly. The scheduling policy network parameters θ are then updated by stochastic gradient descent, so that the output of the updated fθ(G) is closer to the new scheduling policy probability π(S|G; θ) obtained by MCTS, i.e. the following loss function is minimized, where c is an L2 regularization coefficient that prevents overfitting:
l = -π^T log a + c||θ||^2 (1)
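Equation (1) can be evaluated numerically as a sanity check: the loss is the cross-entropy between the MCTS-improved policy π and the network's prediction a, plus an L2 term on θ. The distributions and parameter values below are illustrative:

```python
import numpy as np

def scheduling_loss(pi, a, theta, c=1e-4):
    """l = -pi^T log(a) + c * ||theta||^2, equation (1)."""
    cross_entropy = -np.dot(pi, np.log(a))   # -pi^T log a
    l2 = c * np.sum(theta ** 2)              # c * ||theta||^2
    return cross_entropy + l2

pi = np.array([0.7, 0.2, 0.1])     # MCTS-improved policy (illustrative)
a = np.array([0.5, 0.3, 0.2])      # current network prediction a = f_theta(G)
theta = np.array([0.1, -0.2])      # toy parameter vector
loss = scheduling_loss(pi, a, theta)
```

The cross-entropy term is minimized exactly when a matches π, which is why gradient descent on this loss pulls fθ(G) toward the MCTS-improved policy.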
Step e2), designing a scheduling policy network based on the seq2seq model. The scheduling policy network structure is mainly based on a seq2seq network model combining long short-term memory (LSTM) and an attention mechanism; fig. 3 shows the network structure. The policy network predicts the scheduling policy from its input information (including the hardware resource states in the system and the information of the tasks to be scheduled), outputting the mapping of task scheduling to computing nodes. A seq2seq-based network model accepts variable-length input, which suits application scenarios where task scheduling sequences differ in length. The scheduling policy network predicts the scheduling policy for a DAG task based on the information provided by the hardware resource topology graph and the DAG graph, and the policy is executed in the actual heterogeneous computing system to obtain reward feedback on job completion time. The parameters of the policy network are then updated by the MCTS-based policy gradient reinforcement learning method to raise the expected task-scheduling reward of the next predicted policy. The network structure is optimized and improved iteratively, raising the quality of the scheduling policies the network model predicts.
Step e3), based on DAG graph layering, reducing the length of the to-be-scheduled task sequence input to the policy network each time. When the DAG task graph is large (thousands of task nodes), the to-be-scheduled task sequence input to the policy network becomes too long, and the decision accuracy of a seq2seq policy network degrades on long sequences. This step therefore optimizes long-sequence task scheduling and improves the training efficiency of the seq2seq policy network model. As shown in fig. 4, based on the dependency relationships between DAG tasks, each scheduling cycle includes only the currently schedulable tasks; for example, task 7 can enter the schedulable sequence only after task 3 has been scheduled. Thus, in each scheduling cycle, the task sequence input to the policy network can be split into a schedulable task sequence and a non-schedulable task sequence. Using the schedulable task sequence as the policy network's input improves both the training efficiency of the policy network and the per-cycle prediction speed of the scheduling policy.
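The schedulable-frontier idea of this step (task 7 unlocking only after task 3) reduces to a small dependency check over the DAG. A sketch reusing the example task numbers from the text; the dependency table itself is an illustrative assumption:

```python
# task -> predecessor tasks (illustrative DAG; task 7 depends on task 3,
# task 3 depends on task 1, matching the example in the text)
deps = {1: [], 2: [], 3: [1], 7: [3]}

def schedulable(done):
    """Tasks whose predecessors are all completed form the schedulable
    sequence of the current scheduling cycle; the rest are the
    non-schedulable sequence."""
    return sorted(t for t, pre in deps.items()
                  if t not in done and all(p in done for p in pre))

first_cycle = schedulable(done=set())          # frontier before anything runs
later_cycle = schedulable(done={1, 2, 3})      # frontier once task 3 is done
```

Feeding only `schedulable(done)` to the policy network each cycle is what keeps the input sequence short regardless of the total DAG size.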
Step e4), implementing the input and output of the substation inspection system scheduling algorithm. The policy network mainly comprises an encoder RNN and a decoder RNN. The input to the encoder RNN (recurrent neural network) comprises the hardware resource topology state sequence and the to-be-scheduled DAG task sequence, with connection information fed to the network model as embeddings. The embedding of each heterogeneous computing node contains resource state information such as computing capability type, memory capacity, memory and network bandwidth, together with topology information about adjacent nodes. The embedding of each task contains its type, data transfer size and neighboring tasks. The decoder is a long short-term memory (LSTM) unit with an attention mechanism, and the length of its output sequence equals that of the to-be-scheduled DAG task sequence. At each step the decoder outputs the mapping of the current task to a computing node; this scheduling mapping, combined with the embedding of the scheduled node, serves as the input to the next decoding step.
The invention designs an edge computing resource scheduling method for substation inspection. According to the requirements of the substation inspection service, such as the real-time performance of multi-sensor inspection robots and unified cloud-edge resource management, the design of an edge-computing-based substation inspection system in terms of resource representation, service abstraction, on-demand allocation, resource sharing and migration mechanisms is addressed through the topological description of cooperative relationships, the virtualization technology based on multi-dimensional mapping, the multi-angle fine-grained service abstraction technology, the dynamic on-demand automatic resource allocation technology with adaptive elasticity capability, the resource sharing mechanism based on the distributed coalition game, and the scheduling algorithm combining deep learning and reinforcement learning; network traffic and delay costs are minimized, ensuring the real-time performance of the substation inspection system.
The above description is only an embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. An edge computing resource scheduling method for substation inspection is characterized by comprising the following steps:
step a), modeling edge calculation and cloud collaborative representation: the cloud edge collaborative relationships are classified into the following 4 types: the transformer substation inspection robot is cooperated with the edge server; the transformer substation inspection robot is autonomously cooperated in a P2P mode; the transformer substation inspection robot directly requests the resources of the inspection cloud center service; the transformer substation inspection robots in the covered area are coordinated in the adjacent area through interconnection of edge servers, and a certain task related to a real-time position is completed together; adopting a computing unit topological representation to construct a cloud/edge collaborative representation model;
step b), under a cloud/edge collaborative representation model, uniformly virtualizing, serving and abstracting various resources at the edge end, and providing a uniform interface for upper-layer application by constructing a diversified resource management system, so as to shield the heterogeneity of actual equipment and a system;
step c), establishing a dynamic automatic allocation mechanism of the cloud edge resources according to requirements based on abstract services of various resources at the edge end;
step d), when the cloud-edge resource allocation exceeds demand, establishing computing resource sharing among the substation inspection robots and improving the utilization of network edge resources; specifically, by modeling an incentive mechanism, the optimal resource utilization problem is converted into the problem of obtaining the highest incentive;
and e), realizing a cloud/edge migration algorithm based on deep reinforcement learning, and establishing a real-time migration mechanism of cloud edge resources by adopting a reinforcement learning scheduling method combined with a deep neural network and a Monte Carlo Tree Search (MCTS) method so as to efficiently respond to a cloud/edge resource sharing request.
2. The method for scheduling the edge computing resource for the substation patrol according to claim 1, wherein the step a) comprises the following specific steps:
step a1), in the topological representation, the computing units of the inspection robots serve as vertices and the inclusion relationship between unit pairs as edge weights; the coupling degree between units is computed and the cooperative relationships are depicted as a weighted graph, simply and intuitively representing the associations between computing units and their strengths;
step a2), the inspection robot computing unit ecu is modeled by public attributes and trust attributes:
ecu=(cp,tr) (1)
wherein cp represents the common attributes of the ecu, i.e. the attribute description that every computing element has;
step a3), calculating the trust degree tr of the unit ecu;
the trust degree tr is modeled in 3 parts, identity, behavior and capability, and is expressed as tr = (it, bt, ct), where it is the identity trust possessed by the ecu, indicating the identity validity determined by the ecu's identity reliability guarantee mechanisms for identity authentication, authorization and authorization delegation, and taking a binary value (0 or 1); bt is the behavior trust, indicating the organizational conventions the ecu must follow within different complexes in the inspection robot edge computing environment, including the specific behavioral constraints the environment imposes on the edge computing unit and the interaction specifications to be followed during the autonomous cooperation of edge computing elements; and ct is the capability trust of the ecu.
3. The method for scheduling the edge computing resource for the substation patrol according to claim 1, wherein the step b) comprises the following specific steps:
step b1), virtualizing the calculation node mapping of the transformer substation inspection robot, the inspection communication link mapping and the coordination mapping of the inspection nodes and the virtual links;
step b2), performing abstract aggregation on resources in the physical layer at the bottom layer of the patrol node to form a virtual resource layer;
step b3), performing service encapsulation on the resources of the substation inspection system: the attributes and functions of the resources are mapped into the service resource pool of the cloud-edge architecture, a service serving as the abstract representation of a resource's attributes and functions, and the association between resources and services is established in three ways, namely top-layer decomposition, bottom-layer aggregation and intermediate divergence.
4. The method for scheduling edge computing resources for substation inspection according to claim 3, wherein establishing the association between resources and services in step b3) by top-layer decomposition, bottom-layer aggregation and intermediate divergence specifically comprises:
the top-layer decomposition mode: starting from a task, the task is decomposed into a number of subtasks according to sequence, parallel, loop and selection processes, step by step, until each subtask is a minimal unit with task meaning; each subtask is then associated with a service so that resources are virtually encapsulated into services satisfying the subtasks;
the bottom-layer aggregation mode: a large number of underlying substation inspection system resources are virtualized and encapsulated into various services from the bottom up, and services meeting task requirements are abstracted step by step using cluster abstraction or other virtualization operations;
the intermediate divergence mode: diverging from the middle toward both the bottom layer and the top layer, combining the top-layer decomposition and bottom-layer aggregation modes, it can take into account both task requirements and the actual capacity of resources, and makes the services encapsulated from virtualized resources directly related to tasks.
5. The method for scheduling edge computing resources for substation patrol according to claim 1, wherein the step c) comprises the following specific steps:
step c1), the resource request processor of the patrol system of the transformer substation receives and responds to the user requirements, distributes the requirements to each resource allocation control system, finds, matches and returns the required resources within the control field range, and then the resource allocation control system submits the required resources to the resource request processor, and finally the required resources are sent to the user;
step c2), the resource allocation control system of the substation inspection system mainly comprises a monitoring module, a prediction module and a resource allocation module, wherein the monitoring module is mainly responsible for monitoring the computing running state and collecting resource usage data; the prediction module uses the data collected by the monitoring module to predict the computing resource load in the next time period; and the resource allocation module comprehensively analyzes the current resource load value obtained from the monitoring module and the next-period load value obtained from the prediction module, and, using the current and predicted computing resource demand, adopts a resource allocation strategy based on hybrid elastic control, implementing an adaptive elastic resource allocation method combining active control and passive reaction to achieve effective resource utilization;
step c3), predicting the resource load of the substation patrol system to obtain a load predicted value;
step c4), carrying out self-adaptive flexible configuration of substation patrol system resources based on demand prediction: and after receiving the current load information provided by the monitoring module and the resource demand change information of the next time period provided by the prediction module, the resource allocation module integrates the information of the current load information and the resource demand change information to perform adaptive elastic resource allocation based on the combination of active control and passive reaction.
6. The method for scheduling edge computing resources for substation patrol according to claim 1, wherein the step d) comprises the following specific steps:
step d1), establishing a utility function, and quantitatively analyzing the performance of each edge node;
step d2), establishing a fair-distribution incentive strategy: the substation inspection robots obtain incentives by sharing their remaining resources; let r = (r1, …, rv) be the incentive allocation vector, where each element rk is the share of the total incentive of the whole edge cloud l available to the current node k; if the sum of the incentives of all edge nodes equals the maximum incentive of the edge cloud, r is called an efficient incentive allocation;
step d3), constructing a distributed edge cloud based on the coalition game: a potential blocking coalition l is found by the edge cloud manager FCM, and each edge node k within it stays in the same coalition with probability 1 - ρ and selects another coalition with probability ρ.
7. The method for scheduling the edge computing resource for the substation patrol according to claim 1, wherein the step e) comprises the following specific steps:
step e1), constructing an MCTS-based policy gradient reinforcement learning method;
step e2), designing a scheduling policy network based on the seq2seq model;
step e3), layering the DAG graph so as to reduce the length of the task sequence input to the policy network for scheduling each time;
step e4), realizing the input and output of the scheduling algorithm of the substation inspection system.
8. The method for scheduling edge computing resources for substation inspection according to claim 7, wherein step e1) is specifically as follows:
assuming the stochastic scheduling policy is denoted as π(S|G; θ) and the scheduling policy network model is denoted as fθ(G), the probability vector of scheduling actions predicted by the network is a, i.e. a = fθ(G); using the predicted scheduling actions output by the current fθ, the MCTS method searches a number of task scheduling sequence samples to obtain a new policy probability π(S|G; θ); then, based on stochastic gradient descent, the scheduling policy network parameters θ are updated so that the output of the updated fθ(G) moves closer to the new scheduling policy probability π(S|G; θ) obtained by the MCTS method, i.e. the following loss function is optimized, where c is the L2 regularization coefficient that prevents overfitting:
l = -π^T log(a) + c·||θ||^2 (1).
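A minimal numpy sketch of the loss in equation (1); the MCTS search and the gradient update that consume this loss are omitted, and the function name is an assumption:

```python
import numpy as np

def mcts_policy_loss(pi: np.ndarray, a: np.ndarray,
                     theta: np.ndarray, c: float = 1e-4) -> float:
    """Loss (1): l = -pi^T log(a) + c * ||theta||^2.
    pi    -- improved action probabilities found by the MCTS search,
    a     -- action probabilities predicted by the policy network f_theta(G),
    theta -- network parameters, L2-regularized to prevent overfitting."""
    cross_entropy = -float(pi @ np.log(a))   # -pi^T log(a)
    l2 = c * float(np.sum(theta ** 2))       # c * ||theta||^2
    return cross_entropy + l2
```

Minimizing the cross-entropy term pulls the network's predicted distribution a toward the MCTS-improved distribution π, which is the update described in step e1).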
9. The method for scheduling edge computing resources for substation inspection according to claim 7, wherein step e2) is specifically as follows:
the policy network predicts the scheduling policy from its input information, which comprises the hardware resource states in the system and the information of the tasks to be scheduled; the predicted output is the scheduling policy, namely the mapping of scheduled tasks onto computing nodes; the scheduling policy network predicts the scheduling policy for the DAG tasks based on the information provided by the hardware resource topology graph and the DAG graph; the predicted policy is executed in the actual heterogeneous computing system, yielding reward feedback in the form of job completion time; the parameters of the policy network are then updated with the MCTS-based policy gradient reinforcement learning method, so that the expected task scheduling reward of the next prediction output by the scheduling policy network is improved.
10. The method for scheduling edge computing resources for substation inspection according to claim 7, wherein step e4) is specifically as follows:
the input and output of the policy network involve an encoder RNN and a decoder RNN; the input of the encoder RNN comprises the hardware resource topology state sequence and the DAG task sequence to be scheduled, and this information is fed to the network model as embeddings: the embedding of each heterogeneous computing node contains resource state information such as computing capability type, memory capacity and network bandwidth, together with the topological relation information of its adjacent nodes, while the embedding of each task contains the task type, the data transfer size and the information of its adjacent tasks; the decoder is a long short-term memory unit based on an attention mechanism, whose output sequence length equals the length of the DAG task sequence to be scheduled; at each step the decoder outputs the mapping of the current task onto a computing node, and the output scheduling mapping, combined with the embedding of the scheduled node, serves as the input information for the next decoding step.
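The decoding loop described above can be caricatured as follows; this toy uses random fixed-size embeddings and a greedy dot-product attention in place of the attention-based LSTM decoder, so every name, dimension, and the state-update rule here is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_nodes(nodes):
    # Stand-in for the per-node resource-state embedding (capability, memory, bandwidth).
    return rng.normal(size=(len(nodes), 8))

def embed_tasks(tasks):
    # Stand-in for the per-task embedding (task type, data size, adjacent tasks).
    return rng.normal(size=(len(tasks), 8))

def decode_schedule(node_emb, task_emb):
    """Greedy attention decode: emit one computing-node index per task,
    so the output length equals the DAG task sequence length."""
    state = np.zeros(8)                 # crude stand-in for the LSTM decoder state
    mapping = []
    for t in task_emb:
        query = state + t               # combine decoder state with current task embedding
        scores = node_emb @ query       # attention scores over the computing nodes
        k = int(np.argmax(scores))      # task -> node mapping for this step
        mapping.append(k)
        state = node_emb[k] + t         # feed the chosen node's embedding back in
    return mapping

nodes = ["edge-0", "edge-1", "cloud-0"]
tasks = ["capture", "detect", "report"]
print(decode_schedule(embed_nodes(nodes), embed_tasks(tasks)))
```

The essential structural points survive the simplification: the output sequence has one entry per task, and each emitted mapping is folded back into the decoder state for the next step.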
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110569247.5A CN113282368B (en) | 2021-05-25 | 2021-05-25 | Edge computing resource scheduling method for substation inspection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113282368A true CN113282368A (en) | 2021-08-20 |
CN113282368B CN113282368B (en) | 2023-03-28 |
Family
ID=77281284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110569247.5A Active CN113282368B (en) | 2021-05-25 | 2021-05-25 | Edge computing resource scheduling method for substation inspection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113282368B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170109841A1 (en) * | 2015-10-16 | 2017-04-20 | Andrija Sadikovic | Method and system for aggregation and control of energy grids with distributed energy resources |
CN111401744A (en) * | 2020-03-17 | 2020-07-10 | 重庆邮电大学 | Dynamic task unloading method under uncertain environment in mobile edge calculation |
CN111709582A (en) * | 2020-06-18 | 2020-09-25 | 广东电网有限责任公司 | Method and system for dynamically optimizing edge computing resources of unmanned aerial vehicle and storage medium |
CN112350441A (en) * | 2020-11-03 | 2021-02-09 | 国网智能科技股份有限公司 | Online intelligent inspection system and method for transformer substation |
CN112367354A (en) * | 2020-10-09 | 2021-02-12 | 国网电力科学研究院有限公司 | Intelligent scheduling system and scheduling method for cloud-edge resource graph |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113783726A (en) * | 2021-09-02 | 2021-12-10 | 天津大学 | SLA-oriented resource self-adaptive customization method for edge cloud system |
CN113778677A (en) * | 2021-09-03 | 2021-12-10 | 天津大学 | SLA-oriented intelligent optimization method for cloud-edge cooperative resource arrangement and request scheduling |
CN113823011A (en) * | 2021-09-03 | 2021-12-21 | 深圳云天励飞技术股份有限公司 | Calculation force distribution method of patrol robot and related equipment |
CN113778677B (en) * | 2021-09-03 | 2022-08-02 | 天津大学 | SLA-oriented intelligent optimization method for cloud-edge cooperative resource arrangement and request scheduling |
CN114090239A (en) * | 2021-11-01 | 2022-02-25 | 国网江苏省电力有限公司信息通信分公司 | Model-based reinforcement learning edge resource scheduling method and device |
CN115378498A (en) * | 2021-11-22 | 2022-11-22 | 中国人民解放军战略支援部队信息工程大学 | Multi-user visible light communication low-delay transmission and calculation integrated system |
CN114500530B (en) * | 2021-12-31 | 2023-12-08 | 北方信息控制研究院集团有限公司 | Automatic adjustment method for civil edge information system |
CN114500530A (en) * | 2021-12-31 | 2022-05-13 | 北方信息控制研究院集团有限公司 | Automatic adjustment method for civil edge information system |
CN114827153A (en) * | 2022-07-04 | 2022-07-29 | 广东电网有限责任公司肇庆供电局 | Method, device and system for selecting edge server in edge computing cooperative system |
CN115421930A (en) * | 2022-11-07 | 2022-12-02 | 山东海量信息技术研究院 | Task processing method, system, device, equipment and computer readable storage medium |
CN115967175A (en) * | 2022-11-30 | 2023-04-14 | 广州汇电云联互联网科技有限公司 | Edge end data acquisition control device and method for energy storage power station |
CN115967175B (en) * | 2022-11-30 | 2024-05-10 | 广州汇电云联数科能源有限公司 | Edge data acquisition control device and method for energy storage power station |
CN116341880A (en) * | 2023-05-26 | 2023-06-27 | 成都盛锴科技有限公司 | Distributed scheduling method for column inspection robot based on finite state machine |
CN116341880B (en) * | 2023-05-26 | 2023-08-11 | 成都盛锴科技有限公司 | Distributed scheduling method for column inspection robot based on finite state machine |
CN117255126A (en) * | 2023-08-16 | 2023-12-19 | 广东工业大学 | Data-intensive task edge service combination method based on multi-objective reinforcement learning |
CN116894469A (en) * | 2023-09-11 | 2023-10-17 | 西南林业大学 | DNN collaborative reasoning acceleration method, device and medium in end-edge cloud computing environment |
CN116894469B (en) * | 2023-09-11 | 2023-12-15 | 西南林业大学 | DNN collaborative reasoning acceleration method, device and medium in end-edge cloud computing environment |
CN117579625A (en) * | 2024-01-17 | 2024-02-20 | 中国矿业大学 | Inspection task pre-distribution method for double prevention mechanism |
CN117579625B (en) * | 2024-01-17 | 2024-04-09 | 中国矿业大学 | Inspection task pre-distribution method for double prevention mechanism |
Also Published As
Publication number | Publication date |
---|---|
CN113282368B (en) | 2023-03-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113282368B (en) | Edge computing resource scheduling method for substation inspection | |
Ghobaei-Arani et al. | A cost-efficient IoT service placement approach using whale optimization algorithm in fog computing environment | |
Sun et al. | Multi-objective optimization of resource scheduling in fog computing using an improved NSGA-II | |
CN113435472A (en) | Vehicle-mounted computing power network user demand prediction method, system, device and medium | |
CN106101196B (en) | A kind of cloud rendering platform task scheduling system based on probabilistic model | |
Lin et al. | Computation offloading strategy based on deep reinforcement learning for connected and autonomous vehicle in vehicular edge computing | |
CN113836796A (en) | Power distribution Internet of things data monitoring system and scheduling method based on cloud edge cooperation | |
Li et al. | Collaboration of heterogeneous unmanned vehicles for smart cities | |
Rahbari et al. | Fast and fair computation offloading management in a swarm of drones using a rating-based federated learning approach | |
Xu et al. | Task allocation for unmanned aerial vehicles in mobile crowdsensing | |
Ahmed et al. | IoT-based intelligent waste management system | |
Lv et al. | Multi-robot distributed communication in heterogeneous robotic systems on 5G networking | |
Xiao et al. | Mobile-edge-platooning cloud: a lightweight cloud in vehicular networks | |
CN108521345B (en) | Information physical cooperation method considering communication interruption for island micro-grid | |
Jiao et al. | Service deployment of C4ISR based on genetic simulated annealing algorithm | |
CN116109058A (en) | Substation inspection management method and device based on deep reinforcement learning | |
Masdari et al. | Energy-aware computation offloading in mobile edge computing using quantum-based arithmetic optimization algorithm | |
Li et al. | AttentionFunc: Balancing FaaS compute across edge-cloud continuum with reinforcement learning | |
Ma et al. | AGRCNet: communicate by attentional graph relations in multi-agent reinforcement learning for traffic signal control | |
Habibi et al. | Offering a Demand‐Based Charging Method Using the GBO Algorithm and Fuzzy Logic in the WRSN for Wireless Power Transfer by UAV | |
Du et al. | OctopusKing: A TCT-aware task scheduling on spark platform | |
Yang et al. | Virtual network function placement based on differentiated weight graph convolutional neural network and maximal weight matching | |
Zhang et al. | A hierarchical learning based artificial bee colony algorithm for numerical global optimization and its applications | |
Zhang et al. | Offline reinforcement learning for asynchronous task offloading in mobile edge computing | |
CN113485718B (en) | Context-aware AIoT application program deployment method in edge cloud cooperative system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||