CN114489962A - Task scheduling method, energy router, microgrid system and storage medium - Google Patents


Publication number
CN114489962A
CN114489962A
Authority
CN
China
Prior art keywords: task, delay, executed, cloud, energy
Prior art date
Legal status
Pending
Application number
CN202011268951.9A
Other languages
Chinese (zh)
Inventor
李宇童
华昊辰
曹军威
Current Assignee
Tsinghua University
Toyota Motor Corp
Original Assignee
Tsinghua University
Toyota Motor Corp
Priority date
Filing date
Publication date
Application filed by Tsinghua University, Toyota Motor Corp filed Critical Tsinghua University
Priority to CN202011268951.9A
Publication of CN114489962A

Classifications

    • G06F 9/4806 — Task transfer initiation or dispatching (under G06F 9/48, Program initiating; program switching)
    • G06F 9/5072 — Grid computing (under G06F 9/50, Allocation of resources, and G06F 9/5061, Partitioning or combining of resources)
    • G06F 9/5088 — Techniques for rebalancing the load in a distributed system involving task migration (under G06F 9/50, Allocation of resources, and G06F 9/5083, Techniques for rebalancing the load in a distributed system)
    (All within G Physics → G06 Computing; Calculating or Counting → G06F Electric Digital Data Processing → G06F 9/00 Arrangements for program control → G06F 9/46 Multiprogramming arrangements.)


Abstract

The disclosure provides a task scheduling method in an energy internet, an energy router, a microgrid system, and a storage medium. The energy internet comprises a cloud and at least one microgrid; each microgrid comprises an energy router and at least one energy local area network, and each energy local area network comprises an edge server and a number of terminal devices. The task scheduling method comprises: the energy router acquiring related information of the tasks to be executed by each terminal device in the microgrid where the energy router is located; and the energy router formulating a task scheduling scheme according to the acquired related information so as to optimize the total task delay, wherein the scheduling scheme for each task comprises the division of the task into a first task portion and a second task portion. With the disclosed energy router, the task load offloaded to the cloud and to the edge servers is allocated effectively, so that the high computing speed of the cloud and the low latency of the edge side can both be exploited, improving computing efficiency while optimizing task delay in the energy internet.

Description

Task scheduling method, energy router, microgrid system and storage medium
Technical Field
The disclosure relates to the technical field of the energy internet, and more particularly to a task scheduling method in the energy internet, an energy router, a microgrid system, and a storage medium.
Background
To meet present challenges such as environmental pollution, the energy crisis, and global warming, novel energy systems capable of fully utilizing renewable energy have drawn wide attention from academia, industry, and government. In an energy internet scenario, as ever more controllers, power monitoring devices, and electric equipment are connected, and as internet-of-things terminals become more intelligent and diverse, demand grows for deploying key functions of the energy internet such as model prediction, fault prediction, and power control. These functions place very high requirements on the computing capacity of their host platforms, and the energy internet is increasingly sensitive to task computation delay. The traditional approach is to offload tasks for execution to a cloud with strong computing power; however, conventional cloud computing inevitably incurs high network transmission delay, and if all relevant computing tasks are centralized on a cloud platform, processing complexity and processing delay are difficult to guarantee, seriously affecting the security and stability of the energy internet.
Disclosure of Invention
The present disclosure is provided to address the above drawbacks in the background art. A series of Edge Servers (ESs) are deployed on the edge side close to the device end, and the workload of the tasks to be executed by the Terminal Devices (EDs) in each energy local area network is reasonably divided and offloaded to the cloud and to the edge servers, so that the high computing speed of the cloud and the low latency of the edge side are both exploited, optimizing task delay in the energy internet while improving computing efficiency.
According to a first aspect of the present disclosure, a method for task scheduling in an energy internet is provided. The energy internet may include a cloud and at least one microgrid, each microgrid including an energy router and at least one energy local area network, each energy local area network including an edge server and a plurality of terminal devices. The task scheduling method comprises the step that the energy router acquires relevant information of tasks to be executed by each terminal device in the micro-network where the energy router is located. The task scheduling method further comprises the step that the energy router formulates a task scheduling scheme according to the acquired relevant information of the tasks to be executed so as to optimize the total task delay. The task scheduling scheme of each task comprises the division of a first task part and a second task part of the task, wherein the first task part is executed by the cloud end, and the second task part is executed by the corresponding single edge server.
According to a second aspect of the present disclosure, there is provided an energy router for scheduling tasks in an energy internet, which may include a cloud and at least one microgrid, each microgrid including the energy router and at least one energy local area network, each energy local area network including an edge server and a number of terminal devices. The energy router comprises an information acquisition unit which is configured to acquire relevant information of tasks to be executed by each terminal device in a micro-network where the energy router is located. The energy router also comprises a scheme making unit which is configured to make a task scheduling scheme according to the acquired relevant information of the tasks to be executed so as to optimize the total task delay. The task scheduling scheme of each task comprises the division of a first task part and a second task part of the task, wherein the first task part is executed by the cloud end, and the second task part is executed by the corresponding single edge server.
According to a third aspect of the present disclosure, there is provided a microgrid system comprising an energy router according to various embodiments of the present disclosure and at least one energy local area network.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing instructions that, when executed by a processor, perform a method of task scheduling according to various embodiments of the present disclosure.
The task scheduling method of the embodiments of the disclosure constructs a cloud-edge collaborative architecture suited to the energy internet, in which an ER is erected in each microgrid to serve as the task scheduling center of the microgrid where it is located. Specifically, the ER acquires related information of the tasks to be executed by each ED in its microgrid and formulates a task scheduling scheme accordingly, so that each task is divided into two portions, one executed by the cloud and the other executed by a corresponding single edge server. The cloud side has strong computing power and high computing efficiency, but long-distance network transmission causes high transmission delay; the edge side has lower computing power than the cloud but also lower transmission delay. Working in optimized cooperation, the two can jointly optimize the total task delay (including, but not limited to, computation delay and transmission delay). Rather than offloading tasks entirely to either the edge side or the cloud, the method considers cloud-edge cooperation: the ER formulates the task scheduling scheme from the acquired task information and optimizes it (in particular, the division into the first and second task portions) with the goal of optimizing the total task delay, so that under the optimized scheduling scheme one portion of each task is offloaded to the cloud for processing and the remainder is offloaded to the edge side (a single ES) closer to the ED for processing.
Therefore, the method improves computing efficiency through the strong and fast computing capability of the cloud, which helps reduce task processing time (including task computation delay), while exploiting the low transmission delay of the edge side to reduce total transmission delay; through this cloud-edge cooperation the total task delay is optimized, helping to ensure the security and stability of the energy internet.
Drawings
In the drawings, which are not necessarily drawn to scale, like reference numerals may describe similar components in different views. Like reference numerals having letter suffixes or different letter suffixes may represent different instances of similar components. The drawings illustrate various embodiments generally by way of example and not by way of limitation, and together with the description and claims serve to explain the disclosed embodiments. Such embodiments are illustrative, and are not intended to be exhaustive or exclusive embodiments of the present apparatus or method.
Fig. 1 is a diagram illustrating a cloud-side collaborative architecture of an energy internet according to an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a task scheduling method in an energy internet according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram illustrating an energy router acquiring information related to a task to be performed by a terminal device in a microgrid according to an exemplary embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating an energy router acquiring information related to a task to be performed by a terminal device in a microgrid according to another exemplary embodiment of the present disclosure;
fig. 5 is a flowchart illustrating a task scheduling method in an energy internet according to another embodiment of the present disclosure;
fig. 6 is a schematic diagram illustrating parallel execution of a first process, in which a first task portion is offloaded to and executed by the cloud, and a second process, in which a second task portion is offloaded to and executed by a corresponding edge server, according to an example embodiment of the present disclosure;
FIG. 7 illustrates an exemplary configuration of an energy router according to an embodiment of the present disclosure; and
fig. 8 is an exemplary block diagram of an energy router according to another embodiment of the present disclosure.
Detailed Description
For a better understanding of the technical aspects of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings. Embodiments of the present disclosure are described in further detail below with reference to the figures, but the present disclosure is not limited thereto. Where steps have no required ordering relationship with one another, the order in which they are described is merely an example and should not be construed as a limitation; those skilled in the art will recognize that the order may be adjusted so long as the logical relationships between the steps are not destroyed and the overall process remains practicable.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
The disclosure aims to design a method for optimizing total task delay based on a cloud-edge collaborative architecture for energy internet application scenarios. Fig. 1 is a cloud-edge collaborative architecture diagram of an energy internet according to an embodiment of the present disclosure. As shown in fig. 1, the energy internet includes a cloud 101 and at least one microgrid 102, i.e., microgrid 1, microgrid 2, …, microgrid N, where N is a natural number. Each microgrid 102 may include an Energy Router (ER) 103 and at least one energy local area network 104; each microgrid 102 is illustrated by a dashed oval and each energy local area network 104 by a solid oval. Each energy local area network 104 is, for example, a community or a campus, and includes an Edge Server (ES) 105 and a number of End Devices (EDs) 106. In fig. 1, each energy local area network 104 contains one ES 105 and two EDs 106, but this is by way of example only: each energy local area network 104 typically contains a single ES 105, while the number of EDs 106 may vary as desired.
In the energy internet, the cloud 101 may form the "cloud side" of the cloud-edge collaborative architecture and may consist of a conventional cloud computing center; the ER 103 and the ESs 105 in the energy local area networks 104 may form the "edge side"; and the EDs 106 form the "end side", which includes, but is not limited to, controllers, power status sensors, security detection devices, and various electric equipment.
Fig. 2 is a flowchart illustrating a task scheduling method in an energy internet according to an embodiment of the present disclosure. As shown in fig. 2, the task scheduling method may include the following steps 210 and 220. At step 210, the ER acquires related information of the tasks to be performed by each ED in the microgrid where the ER resides. At step 220, the ER may formulate a task scheduling scheme according to the acquired related information so as to optimize the total task delay. The total task delay may be the sum of the task delays of the respective tasks to be performed. The task scheduling scheme for each task may include the division of the task into a first task portion, executed by the cloud, and a second task portion, executed by a corresponding single ES.
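As a rough illustration, the two steps above can be sketched in code as follows; the class names, fields, and the placeholder even split are assumptions made for illustration only and are not part of the disclosure.

```python
from dataclasses import dataclass

# Hypothetical sketch of steps 210 and 220: the ER gathers each task's
# related information, then emits a per-task scheduling decision that
# splits the task into a cloud portion and an edge portion.

@dataclass
class TaskInfo:
    ed_id: int      # which terminal device (ED) issued the task
    t_j: float      # data transmission amount of the task
    c_j: float      # task processing amount of the task

@dataclass
class Schedule:
    ed_id: int
    cloud_ratio: float  # share of the task forming the first (cloud) portion
    es_id: int          # the single ES assigned the second (edge) portion

def formulate_scheme(tasks):
    """Step 220 stub: a placeholder even split with every task on ES 0."""
    return [Schedule(t.ed_id, 0.5, 0) for t in tasks]
```

A real implementation would choose `cloud_ratio` and `es_id` by optimizing the total task delay, as the remainder of the description explains.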
With continued reference to fig. 1, assume that the energy internet includes N (N an integer greater than 1) microgrids 102, namely microgrid 1, microgrid 2, …, microgrid N. Generally only one ER 103 is installed in each microgrid 102, and that ER 103 serves as the task scheduling center of the microgrid 102 where it is located.
For example, any one microgrid 102 may include an ER 103 and several energy local area networks 104, each of which includes one ES 105 and several EDs 106. As an example, each microgrid 102 may include several edge servers ES_i 105, where i = 1, 2, …, M and M is the total number of energy local area networks 104 in the microgrid 102; each microgrid 102 may also include several end devices ED_j 106, where j = 1, 2, …, O and O is the total number of EDs 106 in the microgrid 102. Note that different microgrids 102 may include different numbers of energy local area networks 104, different totals M of ESs 105, and different totals O of EDs 106.
The ER 103 formulates a task scheduling scheme for the tasks of the end devices ED_1, ED_2, …, ED_j, …, ED_O in the microgrid 102 where the ER 103 is located, and schedules the cloud 101 and each ES_i to perform the corresponding tasks.
Specifically, the ER 103 first acquires related information of the task to be performed by each ED_j in its microgrid 102 (step 210).
The ER 103 can obtain this information directly from each ED_j 106 (j = 1, 2, …, O); for example, the ER 103 may collect it periodically (at regular intervals) or in real time. As shown in fig. 3, each of ED_1, ED_2, …, ED_j, …, ED_O may also send the information about its task to the ER 103 in a task request. Further, as shown in fig. 4, the ER 103 may obtain the information of each ED_j (j = 1, 2, …, O) indirectly through the edge servers ES_i 105 (i = 1, 2, …, M): a given ES 105 first obtains the related information of the tasks to be performed by each ED 106 in the energy local area network 104 where that ES 105 is located, and then forwards it to the ER 103 in the microgrid 102 where that ES 105 is located.
It should be noted that the above example is only for illustration, and in the present disclosure, the manner in which the ER 103 obtains the relevant information of the tasks to be executed by each ED106 in the microgrid 102 is not limited thereto.
Next, a task scheduling scheme is formulated by the ER 103 according to the acquired relevant information of the task to be executed to optimize the total task delay (step 220). The total task delay may be a sum of task delays of respective tasks to be executed, and the task scheduling scheme of each task includes division of a first task portion and a second task portion of the task, where the first task portion is executed by the cloud and the second task portion is executed by a corresponding single ES.
According to the above task scheduling method, a cloud-edge collaborative architecture suited to the energy internet is constructed, in which an ER 103 is erected in each microgrid 102 to serve as the task scheduling center of the microgrid 102 where it is located. Specifically, the ER 103 acquires related information of the tasks to be executed by each ED 106 in its microgrid 102 and formulates a task scheduling scheme accordingly, so that each task is divided into two portions, one executed by the cloud 101 and the other executed by a corresponding single ES 105. The cloud side has strong computing power and high computing efficiency, but long-distance network transmission causes high transmission delay; the edge side has lower computing power than the cloud but also lower transmission delay; working in optimized cooperation, the two can jointly optimize the total task delay (including, but not limited to, computation delay and transmission delay). Rather than offloading tasks entirely to either the edge side or the cloud, the method considers cloud-edge cooperation: the ER 103 formulates the task scheduling scheme from the acquired task information and optimizes it (in particular, the division into the first and second task portions) with the goal of optimizing the total task delay, so that under the optimized scheduling scheme one portion of each task is offloaded to the cloud 101 for processing and the remainder is offloaded to the edge side (a single ES 105) closer to the ED 106 for processing.
Therefore, the method improves computing efficiency through the strong and fast computing capability of the cloud 101, which helps reduce task processing time (including task computation delay), while exploiting the low transmission delay of the edge side to reduce total transmission delay; through this cloud-edge cooperation the total task delay is optimized, helping to ensure the security and stability of the energy internet.
In some embodiments of the present disclosure, the related information of a task to be executed includes the data transmission amount and the task processing amount of the task. The ER 103 may then calculate, from the data transmission amount and the task processing amount, the network transmission delay required to transmit the task to the cloud 101 or to an ES 105 and the task computation delay required for processing by the cloud 101 or the ES 105, and partition the task according to the network transmission delay and the task computation delay so as to obtain a smaller total task delay.
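A minimal numeric sketch of that calculation, under the common simplifying assumption that transmission delay is data volume divided by link bandwidth and computation delay is workload divided by compute speed; the bandwidths and speeds below are invented for illustration and are not from the disclosure.

```python
def transmission_delay(data_bits: float, bandwidth_bps: float) -> float:
    """Network transmission delay of sending `data_bits` over one link."""
    return data_bits / bandwidth_bps

def computation_delay(workload_cycles: float, speed_cps: float) -> float:
    """Task computation delay on a server running `speed_cps` cycles/s."""
    return workload_cycles / speed_cps

# A task w_j with data transmission amount t_j and task processing amount c_j:
t_j, c_j = 8e6, 2e9

# Slow WAN link but fast cloud servers ...
cloud_delay = transmission_delay(t_j, 10e6) + computation_delay(c_j, 20e9)
# ... versus a fast LAN link but a slower edge server.
edge_delay = transmission_delay(t_j, 100e6) + computation_delay(c_j, 4e9)
# Here cloud_delay is 0.9 s and edge_delay is 0.58 s: neither side dominates,
# which is why splitting the task between them can lower the delay further.
```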
In some embodiments of the present disclosure, the task scheduling method may further include performing in parallel a first process, in which the first task portion is offloaded to and executed by the cloud, and a second process, in which the second task portion is offloaded to and executed by the corresponding single edge server. The task delay of each task to be executed is then the larger of the cloud-side task execution delay and the edge-side task execution delay, so the total task delay can be further reduced compared with running the first and second processes non-parallel.
In some embodiments of the present disclosure, referring back to fig. 1, the ER 103 allocates the tasks to be executed by the EDs 106 only to ESs 105 within the same microgrid 102; that is, in the task scheduling scheme formulated by the ER 103, an ES 105 in the microgrid 102 where the ER 103 is located is set as the corresponding single ES 105 executing the second task portion. Taking the first microgrid 102 as an example, its ER 103 formulates a task scheduling scheme based on the acquired related information of the tasks to be executed by the end devices ED_j 106 (j = 1, 2, …, O), and allocates the second task portion of each task only to an edge server ES_i 105 (i = 1, 2, …, M) within the first microgrid 102, without considering ESs 105 in other microgrids 102. In this way the short transmission distances of the edge side are best exploited, yielding smaller network transmission delay.
Fig. 5 is a flowchart illustrating a task scheduling method in an energy internet according to another embodiment of the present disclosure. As shown in fig. 5, the task scheduling method may include step 510, step 520, and step 530.
At step 510, the ER acquires related information of the tasks to be performed by each ED within the microgrid where the ER resides. This step is similar to step 210 in the embodiment shown in fig. 2 and is not repeated here.
In step 520, the ER may formulate a task scheduling scheme according to the acquired related information of the tasks to be executed, taking into account the working condition of each ES and the distance between each ED having a task to perform and each ES. The task scheduling scheme of each task may include the division of the task into a first task portion, executed by the cloud, and a second task portion, executed by the corresponding single ES.
Further, the task scheduling scheme also specifies the ratio between the first and second task portions and identifies the corresponding single ES 105.
In this step, not only the related information of the task (such as data transmission amount and task processing amount) is considered, but also working conditions such as the computing power and available computing resources of each ES 105 and the distance between the ED 106 that is to perform the task and each ES 105. This enables the ER 103 to formulate an optimal task scheduling policy: for example, what fraction of each ED 106's task is offloaded to the cloud 101 for processing, which ES 105 the remainder is offloaded to, and how much computing resource the corresponding ES 105 allocates to the task, so that the total task delay is reduced without exceeding the computing power and computing-resource capacity of any ES 105.
In some embodiments, step 520 may specifically determine the sum of the edge-side network transmission delay (which may be related to the distance between each ED 106 and each ES 105) and the edge-side task computation delay (which may be related to the working condition of each ES 105) as the edge-side task execution delay, determine the sum of the cloud-side network transmission delay and task computation delay as the cloud-side task execution delay, and formulate the task scheduling scheme via an optimization model fed with the acquired related information of the tasks to be executed. The optimization model takes the total task delay as its objective function — for example, requiring that the total task delay not exceed a threshold, or minimizing the total task delay — and may take the computing power of each ES 105 as a constraint condition so that no ES 105 is overloaded.
Further, in some embodiments, the optimization model may also take the computing power of the cloud 101 and each task's tolerance for its own delay as constraints, and the objective function may be designed as a convex function. By designing the objective function to be convex, the problem becomes a mathematically described convex optimization problem that is easy to solve, and the task scheduling scheme is obtained by solving it. For example, the optimization model may be solved by Particle Swarm Optimization or by machine learning, so as to obtain the optimal task scheduling scheme and thereby optimize the total task execution delay.
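As a rough illustration of the particle-swarm approach, the toy sketch below searches for each task's edge-side split ratio that minimizes a total-delay objective. The delay model, the constants, and the PSO coefficients are all invented for illustration and are not the patent's actual formulation; constraints such as per-ES capacity are omitted for brevity.

```python
import random

# Two tasks, each described by (t_j, c_j): data bits and CPU cycles.
T = [(8e6, 2e9), (4e6, 1e9)]
WAN_BPS, CLOUD_CPS = 10e6, 20e9   # assumed cloud-side link and compute speed
LAN_BPS, ES_CPS = 100e6, 4e9      # assumed edge-side link and compute speed

def total_delay(alpha):
    """Sum over tasks of max(cloud-side, edge-side) execution delay,
    where alpha[j] is the fraction of task j sent to the edge side."""
    total = 0.0
    for a, (t, c) in zip(alpha, T):
        cloud = (1 - a) * (t / WAN_BPS + c / CLOUD_CPS)
        edge = a * (t / LAN_BPS + c / ES_CPS)
        total += max(cloud, edge)   # the two processes run in parallel
    return total

def pso(n_particles=20, iters=100, seed=0):
    """Toy particle swarm: positions are split-ratio vectors in [0, 1]^dim."""
    rng = random.Random(seed)
    dim = len(T)
    pos = [[rng.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=total_delay)[:]
    for _ in range(iters):
        for k in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull + social pull, then clamp to [0, 1]
                vel[k][d] = (0.7 * vel[k][d]
                             + 1.5 * rng.random() * (pbest[k][d] - pos[k][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[k][d]))
                pos[k][d] = min(1.0, max(0.0, pos[k][d] + vel[k][d]))
            if total_delay(pos[k]) < total_delay(pbest[k]):
                pbest[k] = pos[k][:]
        gbest = min(pbest + [gbest], key=total_delay)[:]
    return gbest
```

The objective takes the per-task maximum of the cloud-side and edge-side delays because the two offloading processes run in parallel, matching the parallel first and second processes described elsewhere in the disclosure.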
At step 530, a first process of offloading the first task portion to the cloud and executing it and a second process of offloading the second task portion to the corresponding single ES and executing it are performed in parallel.
After formulating the task scheduling scheme, the ER 103 may control offloading of the first task portion of each task to the cloud 101 for execution (the first process) and of the second task portion to the corresponding single ES 105 for execution (the second process). Since the first and second processes proceed in parallel, the task delay of each task to be executed is the larger of the cloud-side and edge-side task execution delays.
For a process that controls a first process and a second process to proceed in parallel, as shown in fig. 6, a signal including a task scheduling scheme may be sent to the ED106 having a task to be executed through the ER 103, so that the ED106 may offload a first task portion to the cloud 101 and a second task portion to a corresponding single ES 105 in parallel according to the task scheduling scheme, and the cloud 101 and the corresponding single ES 105 may execute the first task portion and the second task portion in parallel.
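The parallel first and second processes can be sketched as below; the offload functions are placeholders standing in for real network transmission and remote execution, which the disclosure does not specify at this level of detail.

```python
from concurrent.futures import ThreadPoolExecutor

def offload_to_cloud(portion):
    # placeholder: transmit `portion` over the WAN and run it in the cloud
    return f"cloud done: {portion}"

def offload_to_edge(portion):
    # placeholder: transmit `portion` over the LAN and run it on the ES
    return f"edge done: {portion}"

def execute_task(first_part, second_part):
    """Submit both portions at once; the task finishes when the slower
    of the two returns, so its delay is the max of the two delays."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        f_cloud = pool.submit(offload_to_cloud, first_part)
        f_edge = pool.submit(offload_to_edge, second_part)
        return f_cloud.result(), f_edge.result()
```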
The process of modeling and solving the optimization model will be described in detail below with specific embodiments. In this specific embodiment, modeling of the optimization model is performed by taking a microgrid as an example, and the optimization model is solved by taking a particle swarm optimization algorithm as an example.
For convenience of explanation, the elements of the task scheduling method are first defined. Let I and J denote the sets of ESs and EDs under a microgrid; the j-th ED under the microgrid is written ED_j, where j ∈ J, and the i-th ES under the microgrid is written ES_i, where i ∈ I. Define w_j as ED_j's task request, described by the related information w_j = {t_j, c_j}, where t_j denotes the data transmission amount of task w_j and c_j denotes the task processing amount of task w_j.
In a cloud-edge coordination scheme according to various embodiments of the present disclosure, each ED_j sends a task request, containing the related information of the task to be executed, to the ER under the same microgrid (usually a single ER is erected in one microgrid). The ER serves as the task scheduling center of the microgrid and divides each task into two portions: the first task portion is offloaded to the cloud for execution, and the second task portion is offloaded to some ES on the edge side for execution.
To express how tasks are divided, let α_j^cloud and α_j^edge denote the fractions of task w_j uploaded to the cloud side and to the edge side, respectively. Therefore,

    α_j^cloud + α_j^edge = 1.

For convenience of formula derivation, α_j^edge is hereafter written uniformly as α_j, i.e. α_j = α_j^edge and α_j^cloud = 1 − α_j.
Next, according to the relevant information of the task to be executed (e.g., the data transmission amount t_j and the task throughput c_j), the working condition of each ES_i, and the distance between the ED issuing the task and each ES_i, the ER allocates the part of the task offloaded to the edge side to the most suitable ES for processing.
Therefore, a matrix X is defined to represent the mapping between the task requests of the EDs and the ESs under the same microgrid. X is an I × J matrix, and x_{i,j} is the value in the i-th row and j-th column of X. x_{i,j} = 1 means that ED_j offloads its task to ES_i for execution, and x_{i,j} = 0 means that ED_j does not offload its task to ES_i:

x_{i,j} = 1 if ED_j offloads its task to ES_i for execution, and x_{i,j} = 0 otherwise.

Considering that each ED's task is uploaded to only one ES for execution, Σ_{i∈I} x_{i,j} = 1.
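As an illustration of the mapping matrix X and the single-ES rule Σ_{i∈I} x_{i,j} = 1, the following sketch uses hypothetical sizes I = 3 and J = 4:

```python
import numpy as np

# Illustrative mapping for I = 3 edge servers and J = 4 terminal devices:
# X[i, j] = 1 means ED_j offloads its edge-side task part to ES_i.
X = np.array([[1, 0, 0, 1],
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

# Each ED's task goes to exactly one ES: the sum over i of x_{i,j} equals 1.
column_sums = X.sum(axis=0)
assert (column_sums == 1).all()
```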
In order to obtain the optimized task delay, the task delay needs to be modeled. Note that the processes of uploading the task to the cloud side and to the edge side can proceed in parallel (that is, they are performed simultaneously without interfering with each other), so the delay models for uploading to the cloud side and to the edge side need to be built separately.
(1) Modeling the task execution delay model on the edge side

The edge-side task execution delay consists of two parts: the time consumed by network transmission (the edge-side network transmission delay, denoted d^{e,net}_{i,j}) and the time consumed by task computation (the edge-side task computation delay, denoted d^{e,comp}_{i,j}). Since transmission and execution are serial, i.e. the task is first transmitted to the corresponding ES and then computed, the task execution delay of the task w_j offloaded to the edge side is the sum of the time spent on both. Thus the edge-side task execution delay is defined as d^e_{i,j}, and

d^e_{i,j} = d^{e,net}_{i,j} + d^{e,comp}_{i,j}.
Edge-side network transmission delay d^{e,net}_{i,j}

The design takes into account both the time required for data transmission and the time required for data propagation. According to the principles of network communication, the edge-side network transmission delay is

d^{e,net}_{i,j} = α_j·t_j/B_j + r_{i,j}/c,

where c represents the transmission (propagation) rate of the network channel, r_{i,j} represents the length of the channel between the terminal device ED_j and the edge server ES_i, B_j represents the channel bandwidth, t_j represents the data transmission amount of the task w_j to be executed, and α_j represents the proportion of w_j uploaded to the edge side.
Edge-side task computation delay d^{e,comp}_{i,j}

The task computation delay d^{e,comp}_{i,j} is given by the ratio of the task computation amount to the execution rate, i.e.

d^{e,comp}_{i,j} = α_j·c_j/p_{i,j},

where p_{i,j} represents the execution rate allocated by the edge server ES_i to the task w_j to be executed, α_j·c_j represents the task computation amount to be performed by the edge server ES_i, and c_j represents the task throughput of the task w_j to be executed.

Thus, the edge-side task execution delay d^e_{i,j} can be expressed as formula (1):

d^e_{i,j} = α_j·t_j/B_j + r_{i,j}/c + α_j·c_j/p_{i,j}    formula (1)
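A minimal sketch of formula (1) as a function follows; the parameter names, the default propagation rate, and the example values are illustrative assumptions, not values from this disclosure.

```python
def edge_delay(alpha_j, t_j, c_j, r_ij, B_j, p_ij, c=3.0e8):
    """Edge-side task execution delay per formula (1)."""
    transmission = alpha_j * t_j / B_j   # transfer time of the edge-side data share
    propagation = r_ij / c               # channel length over propagation rate
    computation = alpha_j * c_j / p_ij   # edge-side computation amount over execution rate
    return transmission + propagation + computation

# Half the task (alpha_j = 0.5) offloaded to the edge over a 3e8-length channel.
d_e = edge_delay(alpha_j=0.5, t_j=10.0, c_j=20.0, r_ij=3.0e8, B_j=5.0, p_ij=4.0)
```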
(2) Modeling the task execution delay model on the cloud side

The cloud-side task execution delay model likewise consists of two parts: the time consumed by network transmission (the cloud-side network transmission delay, denoted d^{c,net}_j) and the time consumed by task computation (the cloud-side task computation delay, denoted d^{c,comp}_j). Since transmission and computation are serial, i.e. the task is first transmitted to the corresponding cloud server and then computed, the task execution delay of the task w_j offloaded to the cloud side is the sum of the time spent on both. Thus the cloud-side task execution delay is defined as d^c_j, and

d^c_j = d^{c,net}_j + d^{c,comp}_j.
Cloud-side network transmission delay d^{c,net}_j

According to the communication principles, the network transmission delay from ED_j to the cloud is defined as

d^{c,net}_j = (1 − α_j)·t_j/S_j,

where S_j represents the network channel transmission rate from the terminal device ED_j to the cloud and 1 − α_j represents the proportion of w_j uploaded to the cloud side.
Cloud-side task computation delay d^{c,comp}_j

The task computation delay d^{c,comp}_j is given by the ratio of the task computation amount to the computation speed, i.e.

d^{c,comp}_j = (1 − α_j)·c_j/p^c_j,

where p^c_j represents the computational resource (execution rate) allocated by the processor of the cloud to the task w_j to be executed.

Thus, the cloud-side task execution delay d^c_j can be expressed as formula (2):

d^c_j = (1 − α_j)·t_j/S_j + (1 − α_j)·c_j/p^c_j    formula (2)
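Formula (2) can likewise be sketched as a function; the names and example values are illustrative assumptions, not values from this disclosure.

```python
def cloud_delay(alpha_j, t_j, c_j, S_j, p_cj):
    """Cloud-side task execution delay per formula (2): the remaining
    (1 - alpha_j) share is transmitted at rate S_j and computed at rate p_cj."""
    transmission = (1 - alpha_j) * t_j / S_j
    computation = (1 - alpha_j) * c_j / p_cj
    return transmission + computation

# The other half of the same example task is processed on the cloud side.
d_c = cloud_delay(alpha_j=0.5, t_j=10.0, c_j=20.0, S_j=5.0, p_cj=4.0)
```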
Define d_{i,j} as the task delay of executing the task w_j. In the task allocation process, since the processes of uploading the task to the cloud side and to the edge side are parallel, the task delay d_{i,j} of the task w_j is the larger of the task execution delay d^c_j of the first task part uploaded to the cloud side and the task execution delay d^e_{i,j} of the second task part uploaded to the edge side. Therefore,

d_{i,j} = max{d^c_j, d^e_{i,j}}.

Substituting the expressions of the variables obtained above, formula (3) is obtained:

d_{i,j} = max{(1 − α_j)·t_j/S_j + (1 − α_j)·c_j/p^c_j, α_j·t_j/B_j + r_{i,j}/c + α_j·c_j/p_{i,j}}    formula (3)
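Formula (3) can be sketched by combining the two delays and taking the larger value; all names and example values are illustrative assumptions, not identifiers from this disclosure.

```python
def task_delay(alpha_j, t_j, c_j, r_ij, B_j, p_ij, S_j, p_cj, c=3.0e8):
    """Formula (3): the cloud and edge parts run in parallel, so the task
    delay is the larger of the two execution delays."""
    d_edge = alpha_j * t_j / B_j + r_ij / c + alpha_j * c_j / p_ij
    d_cloud = (1 - alpha_j) * t_j / S_j + (1 - alpha_j) * c_j / p_cj
    return max(d_cloud, d_edge)

d = task_delay(alpha_j=0.5, t_j=10.0, c_j=20.0, r_ij=3.0e8,
               B_j=5.0, p_ij=4.0, S_j=5.0, p_cj=4.0)
```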
further, corresponding constraints can be defined for the mathematical model according to actual conditions:
since the computing power of each ES is limited, an edge server ES is definediHas a computing power upper limit of
Figure BDA00027770001900001113
Because of the edge server ES in a microgridiMay perform the tasks of multiple EDs within the microgrid, and therefore, these EDs may be scheduled for use in a subsequent microgrid operationThe sum of the computing resources of the processors to which the tasks are allocated cannot exceed
Figure BDA00027770001900001114
Namely:
Figure BDA0002777000190000121
Similarly, denote the total computing power of the cloud as p^{c,max}. The sum of the computing resources of the processors allocated to the tasks uploaded to the cloud cannot exceed p^{c,max}, namely:

Σ_{j∈J} p^c_j ≤ p^{c,max}    formula (5)
Since α_j represents the proportion of the task w_j offloaded to the edge side for execution, the value range of α_j is:

0 ≤ α_j ≤ 1    formula (6)
Considering that the task w_j to be executed has a maximum tolerance D_j for its task delay d_{i,j}, the value range of d_{i,j} is:

d_{i,j} ≤ D_j    formula (7)
Finally, referring to the previous steps, the value of x_{i,j} in the matrix X satisfies:

Σ_{i∈I} x_{i,j} = 1    formula (8)
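Constraints (4)-(8) can be collected into one feasibility check for a candidate schedule; the function, argument names, and example capacities below are illustrative assumptions, not part of this disclosure.

```python
def feasible(X, P_edge, p_edge_max, p_cloud, p_cloud_max, alphas, d, D):
    """Check a candidate schedule against constraints (4)-(8).
    X: I x J mapping matrix; P_edge[i][j]: edge execution rates p_{i,j};
    p_cloud[j]: cloud rates p^c_j; d[i][j]: task delays; D[j]: tolerances."""
    I, J = len(X), len(X[0])
    for i in range(I):  # constraint (4): per-ES computing capacity
        if sum(X[i][j] * P_edge[i][j] for j in range(J)) > p_edge_max[i]:
            return False
    if sum(p_cloud) > p_cloud_max:  # constraint (5): cloud capacity
        return False
    if any(not 0.0 <= a <= 1.0 for a in alphas):  # constraint (6)
        return False
    for j in range(J):
        if sum(X[i][j] for i in range(I)) != 1:  # constraint (8): one ES per ED
            return False
        i_star = max(range(I), key=lambda i: X[i][j])  # the assigned ES
        if d[i_star][j] > D[j]:  # constraint (7): delay tolerance
            return False
    return True

ok = feasible(X=[[1, 0], [0, 1]], P_edge=[[2.0, 0.0], [0.0, 3.0]],
              p_edge_max=[2.0, 3.0], p_cloud=[1.0, 1.0], p_cloud_max=5.0,
              alphas=[0.5, 0.4], d=[[1.0, 9.9], [9.9, 2.0]], D=[1.5, 2.5])
```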
The task delay model d_{i,j} represents the task delay of one terminal device ED_j. In the cloud-edge collaborative architecture, each ED may have a task to be executed at the same time, and the total task delay of these tasks is obtained by summation. Defining the total task delay as d_all, then

d_all = Σ_{j∈J} Σ_{i∈I} x_{i,j}·d_{i,j}.
The optimization goal of this embodiment is to minimize the total task delay d_all. Combining the constraints defined by formulas (4)-(8) with the task delay model d_{i,j}, the final objective function is obtained:

J = min d_all = min Σ_{j∈J} Σ_{i∈I} x_{i,j}·d_{i,j},

such that the constraints defined by formulas (4)-(8) are satisfied.
The objective function J thus obtained is converted into a convex optimization problem that is easy to solve, and the optimization problem can be solved by a particle swarm optimization algorithm, thereby verifying the feasibility of this delay-minimizing optimization method based on the cloud-edge collaborative architecture for energy internet application scenarios.
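As one possible sketch of the particle swarm step, the following minimizes the sum of per-task delays of formula (3) over the edge-side proportions for a fixed mapping X, clamping each proportion to [0, 1] per constraint (6). The swarm size, inertia and learning factors, and the omission of the propagation term are illustrative simplifications, not parameters from this disclosure.

```python
import random

def total_delay(alphas, tasks):
    """Sum of per-task delays (formula (3)); each task is a tuple
    (t_j, c_j, B_j, p_ij, S_j, p_cj). Propagation term r_{i,j}/c omitted."""
    total = 0.0
    for a, (t_j, c_j, B_j, p_ij, S_j, p_cj) in zip(alphas, tasks):
        d_edge = a * t_j / B_j + a * c_j / p_ij
        d_cloud = (1 - a) * t_j / S_j + (1 - a) * c_j / p_cj
        total += max(d_edge, d_cloud)
    return total

def pso_minimize(tasks, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm over the edge proportions in [0, 1]^J."""
    J = len(tasks)
    pos = [[random.random() for _ in range(J)] for _ in range(n_particles)]
    vel = [[0.0] * J for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [total_delay(p, tasks) for p in pos]
    g = min(range(n_particles), key=lambda k: pbest_val[k])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for k in range(n_particles):
            for j in range(J):
                r1, r2 = random.random(), random.random()
                vel[k][j] = (w * vel[k][j]
                             + c1 * r1 * (pbest[k][j] - pos[k][j])
                             + c2 * r2 * (gbest[j] - pos[k][j]))
                # clamp to [0, 1] to respect constraint (6)
                pos[k][j] = min(1.0, max(0.0, pos[k][j] + vel[k][j]))
            val = total_delay(pos[k], tasks)
            if val < pbest_val[k]:
                pbest[k], pbest_val[k] = pos[k][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[k][:], val
    return gbest, gbest_val
```

For a single symmetric task (t_j = c_j = 10 with unit rates), the edge delay is 20·α and the cloud delay is 20·(1 − α), so the optimum splits the task evenly and the minimum delay approaches 10.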
The task scheduling method disclosed in the embodiments of the present disclosure describes and models task scheduling and task delay based on a cloud-edge collaborative architecture in an energy internet application scenario, designs a cloud-edge collaborative architecture for such scenarios, and gives each execution delay of the architecture in the form of a concise mathematical expression. The complex engineering problem is converted into a mathematical problem that is easy to solve, and the objective function is solved, thereby realizing a delay-minimizing optimization method. According to the embodiments of the present disclosure, executing the cloud-edge collaborative algorithm reasonably distributes execution tasks between the cloud side and the edge side when the energy internet runs, which ensures computing capability while minimizing task execution delay, thereby improving the overall performance of the system.
Fig. 7 illustrates an exemplary configuration of an energy router according to an embodiment of the present disclosure. The Energy Router (ER) is used for scheduling tasks in an energy Internet, the energy Internet comprises a cloud end and at least one microgrid, each microgrid comprises an energy router and at least one energy local area network, and each energy local area network comprises an Edge Server (ES) and a plurality of terminal devices (ED).
As shown in fig. 7, the energy router 700 may include an information obtaining unit 710 and a scheme formulating unit 720. The information obtaining unit 710 may be configured to obtain relevant information of tasks to be executed by each terminal device within the microgrid in which the energy router is located. The scheme formulating unit 720 may be configured to formulate a task scheduling scheme according to the acquired relevant information of the tasks to be executed so as to optimize the total task delay, where the task scheduling scheme of each task includes a division of the task into a first task part and a second task part, the first task part being executed by the cloud and the second task part being executed by the corresponding single edge server.
The energy router is used for scheduling tasks in an energy internet and used as a task scheduling center of a microgrid in which the energy router is located, relevant information of tasks to be executed by each ED in the microgrid in which the energy router is located is obtained, and then a task scheduling scheme is formulated according to the obtained relevant information of the tasks to be executed, so that each task is divided into two parts, one part of the tasks is executed by a cloud end, and the other part of the tasks is executed by a corresponding single ES, and therefore total task delay is optimized. The energy router considers not only the unloading of all tasks to the edge side or the cloud side, but also the cloud-side cooperation, and formulates a task scheduling scheme according to the acquired relevant information of the tasks to be executed, so that part of each task can be unloaded to the cloud side for processing according to the scheduling scheme, and part of each task is unloaded to the edge side closer to the ED for processing. Therefore, the energy router of the embodiment is used for task scheduling, the overall computing efficiency of the energy internet is improved through the powerful and quick computing capability of the cloud, the consumed time of task computing is reduced, meanwhile, the transmission delay is reduced by using the characteristic of low delay at the edge side, the total task delay is further reduced, and the safety and the stability of the energy internet are guaranteed.
In some embodiments, the relevant information of the task to be executed includes the data transmission amount and the task processing amount of the task to be executed. As shown in fig. 7, the energy router 700 may further include a sending unit 730 configured to: send a signal including the task scheduling scheme to the terminal devices having tasks to be executed, where the total task delay is the sum of the task delays of the tasks to be executed, and the task delay of each task to be executed is the larger of its cloud-side task execution delay and its edge-side task execution delay.
In some embodiments, the task scheduling scheme formulated by the scheme formulating unit 720 includes the second task portion of the tasks being executed by an edge server within the microgrid in which the energy router is located.
In some embodiments, the scenario formulation unit 720 is further configured to: and according to the acquired relevant information of the tasks to be executed, considering the working condition of each edge server and the distance between each terminal device to be executed and each edge server, and formulating a task scheduling scheme, wherein the task scheduling scheme further comprises the ratio of the first task part to the second task part and the corresponding single edge server.
In some embodiments, the scenario formulation unit 720 is further configured to: determining the sum of network transmission delay and task calculation delay of the edge side as the task execution delay of the edge side; determining the sum of the network transmission delay and the task calculation delay of the cloud side as the task execution delay of the cloud side; and formulating a task scheduling scheme through an optimization model according to the acquired relevant information of the tasks to be executed, wherein the optimization model takes the total task time delay as an objective function and the computing capacity of each edge server as a constraint condition.
In some embodiments, the optimization model further takes the computing capacity of the cloud and the tolerance of each task to its respective task delay as constraint conditions, the objective function includes a convex function, and the scheme formulating unit 720 solves the optimization model via a particle swarm optimization algorithm to formulate the task scheduling scheme.
In some embodiments, the energy internet comprises N microgrids, and for one of the N microgrids, X = {x_{i,j}} in the task scheduling scheme takes the values:

x_{i,j} = 1 if ED_j offloads its task to ES_i for execution, and x_{i,j} = 0 otherwise,

where N is a positive integer, X is an I × J matrix, I and J respectively represent the sets of edge servers and terminal devices under the microgrid, ED_j represents the j-th terminal device under the microgrid, j ∈ J, ES_i represents the i-th edge server under the microgrid, i ∈ I, and the task of each terminal device can only be offloaded to one edge server for execution, i.e. Σ_{i∈I} x_{i,j} = 1.
In fig. 7, the functional blocks (the information acquisition unit 710, the scenario formulation unit 720, and the transmission unit 730) described as performing various processes may be configured to include circuit blocks, memories, and the like in terms of hardware, and to be implemented by programs and the like loaded into the memories (storage media) in terms of software. Accordingly, those skilled in the art will appreciate that these functional blocks may be implemented in various forms of hardware, software, or a combination thereof, and that the present invention is not limited thereto.
Since the functions of the respective units in the energy router 700 correspond to the respective steps in the above-described task scheduling method embodiments, the functions and operations of the respective units can be understood with reference to the respective steps in the respective method embodiments, and are not described in detail here to avoid redundancy.
Fig. 8 is a schematic structural diagram of an energy router according to another embodiment of the present disclosure. As shown in fig. 8, the energy router 800 includes a processor 810, a memory 820, a solid state transformer 830, an energy storage battery 840, and an interface 850.
Processor 810 may be a processing device including one or more general-purpose processing devices, such as a microprocessor, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), etc. More specifically, processor 810 may be a Complex Instruction Set Computing (CISC) microprocessor, a Reduced Instruction Set Computing (RISC) microprocessor, a Very Long Instruction Word (VLIW) microprocessor, a processor running other instruction sets, or a processor running a combination of instruction sets. Processor 810 may also be one or more special-purpose processing devices, such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), a System on a Chip (SoC), or the like. The processor 810 may be communicatively coupled to the memory 820 and configured to execute computer-executable instructions stored thereon to perform the task scheduling method in the energy internet of the above-described embodiments.
The memory 820 may be a non-transitory computer-readable medium such as Read Only Memory (ROM), Random Access Memory (RAM), phase change random access memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), Electrically Erasable Programmable Read Only Memory (EEPROM), other types of Random Access Memory (RAM), flash disk or other forms of flash memory, cache, registers, static memory, compact disk read only memory (CD-ROM), Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes or other magnetic storage devices, or any other possible non-transitory medium that can be used to store information or instructions that can be accessed by a computer device, and so forth.
The core of the energy router 800 is the solid-state transformer 830, the main physical device for controlling electric energy. It is an electric device that combines power electronic conversion technology with electric energy exchange technology based on the electromagnetic induction principle to convert electric energy with one set of electric characteristics into electric energy with another, where the electric characteristics include the amplitude, phase, frequency, number of phases, and waveform of the voltage (or current), etc. The solid-state transformer 830 may include an AC/DC rectifier, a DC/AC converter, a high-frequency transformer, an AC/DC converter, a low-voltage DC bus parallel module, and a DC/AC inverter. The AC/DC rectifier is connected to the DC/AC converter through a high-voltage DC bus; the DC/AC converter, the high-frequency transformer, and the AC/DC converter are connected in sequence; the AC/DC converter is connected to the low-voltage DC bus parallel module through a low-voltage DC bus; and the low-voltage DC bus parallel module is connected to the DC/AC inverter.
The energy storage battery 840 is an electric energy storage module of the energy router 800, and provides electric energy quality control compensation or provides active power when the power supply of the microgrid system fails, and also performs a power balance function of the power system. The energy storage battery 840 may be electrically connected to the solid state transformer 830 via a DC/AC converter.
The interface 850 includes a plurality of interfaces such as a power interface, a communication interface, and a water and electricity interface.
The embodiments of the present disclosure further provide a microgrid system, which includes an energy router according to any one of the above embodiments and at least one energy local area network.
The disclosed embodiments also provide a non-transitory computer readable medium storing instructions that, when executed by the processor 810, perform a task scheduling method according to any of the above embodiments.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Moreover, although exemplary embodiments have been described herein, the scope thereof includes any and all embodiments based on the disclosure with equivalent elements, modifications, omissions, combinations (e.g., of various embodiments across), adaptations or alterations. The elements in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the specification or during the prosecution of the disclosure, which examples are to be construed as non-exclusive. It is intended, therefore, that the specification and examples be considered as exemplary only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more versions thereof) may be used in combination with each other. For example, other embodiments may be utilized by those of ordinary skill in the art upon reading the foregoing description. In addition, in the foregoing detailed description, various features may be grouped together to streamline the disclosure. This should not be interpreted as an intention that a disclosed feature not claimed is essential to any claim. Rather, the subject matter of the present disclosure may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with each other in various combinations or permutations. The scope of the disclosure should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The above embodiments are merely exemplary embodiments of the present disclosure, which is not intended to limit the present disclosure, and the scope of the present disclosure is defined by the claims. Various modifications and equivalents of the disclosure may occur to those skilled in the art within the spirit and scope of the disclosure, and such modifications and equivalents are considered to be within the scope of the disclosure.

Claims (18)

1. A task scheduling method in an energy Internet, wherein the energy Internet comprises a cloud and at least one microgrid, each microgrid comprises an energy router and at least one energy local area network, each energy local area network comprises an edge server and a plurality of terminal devices, and the task scheduling method comprises the following steps:
the energy router acquires relevant information of tasks to be executed by each terminal device in the micro-network where the energy router is located;
and the energy router formulates a task scheduling scheme according to the acquired related information of the tasks to be executed so as to optimize the total task delay, wherein the task scheduling scheme of each task comprises the division of a first task part and a second task part of the task, the first task part is executed by the cloud end, and the second task part is executed by a corresponding single edge server.
2. The task scheduling method according to claim 1, wherein the information related to the task to be executed includes a data transmission amount and a task processing amount of the task to be executed, and the task scheduling method further comprises: performing, in parallel, a first process of offloading the first task part to the cloud and executing the first task part and a second process of offloading the second task part to the corresponding single edge server and executing the second task part, wherein the total task delay is the sum of the task delays of the tasks to be executed, and the task delay of each task to be executed is the larger of the cloud-side task execution delay and the edge-side task execution delay.
3. The task scheduling method according to claim 2, wherein the corresponding single edge server is an edge server within the microgrid in which the energy router is located.
4. The task scheduling method according to claim 2, wherein the formulating a task scheduling scheme according to the obtained information about the task to be executed to optimize the total task delay further comprises:
and according to the acquired relevant information of the tasks to be executed, considering the working condition of each edge server and the distance between each terminal device of the tasks to be executed and each edge server, formulating the task scheduling scheme, wherein the task scheduling scheme further comprises the occupation ratio of the first task part and the second task part and the corresponding single edge server.
5. The task scheduling method according to claim 4, wherein the formulating the task scheduling scheme according to the obtained relevant information of the task to be executed, taking into account the working conditions of the edge servers and the distances between the edge servers and the terminal devices to execute the task, further comprises:
determining the sum of network transmission delay and task calculation delay of an edge side as the task execution delay of the edge side;
determining the sum of network transmission delay and task calculation delay of a cloud side as the task execution delay of the cloud side;
and formulating the task scheduling scheme through an optimization model according to the acquired relevant information of the tasks to be executed, wherein the optimization model takes the total task time delay as an objective function and the computing capacity of each edge server as a constraint condition.
6. The task scheduling method according to claim 5, wherein the optimization model further uses the computing power of the cloud and tolerance of each task to its respective task delay as constraint conditions, the objective function includes a convex function, and the optimization model is solved through a particle swarm optimization algorithm to formulate the task scheduling scheme.
7. The task scheduling method of claim 5, wherein the energy internet comprises N microgrids, and for each of the N microgrids, X = {x_{i,j}} in the task scheduling scheme takes the values:

x_{i,j} = 1 if ED_j offloads its task to ES_i for execution, and x_{i,j} = 0 otherwise,

wherein N is a positive integer, X is an I × J matrix, I and J respectively represent the sets of edge servers and terminal devices under the microgrid, ED_j represents the j-th terminal device under the microgrid, j ∈ J, ES_i represents the i-th edge server under the microgrid, i ∈ I, and the task of each terminal device can only be offloaded to one edge server for execution, i.e. Σ_{i∈I} x_{i,j} = 1.
8. The task scheduling method according to claim 7, wherein w_j = {t_j, c_j}, the edge-side task execution delay d^e_{i,j} is expressed as formula (1):

d^e_{i,j} = α_j·t_j/B_j + r_{i,j}/c + α_j·c_j/p_{i,j}    formula (1)

and the cloud-side task execution delay d^c_j is expressed as formula (2):

d^c_j = (1 − α_j)·t_j/S_j + (1 − α_j)·c_j/p^c_j    formula (2)

wherein t_j represents the data transmission amount of the task w_j to be executed, c_j represents the task throughput of the task w_j to be executed, r_{i,j} represents the length of the channel between the terminal device ED_j and the edge server ES_i, c represents the transmission rate of the network channel, B_j represents the channel bandwidth, α_j·t_j/B_j + r_{i,j}/c represents the edge-side network transmission delay, α_j·c_j represents the task computation amount to be performed by the edge server ES_i, p_{i,j} represents the execution rate allocated by the edge server ES_i to the task w_j to be executed, α_j·c_j/p_{i,j} represents the edge-side task computation delay, S_j represents the network channel transmission rate from the terminal device ED_j to the cloud, (1 − α_j)·t_j/S_j represents the cloud-side network transmission delay, p^c_j represents the execution rate allocated by the processor of the cloud to the task w_j to be executed, and (1 − α_j)·c_j/p^c_j represents the cloud-side task computation delay.
9. The task scheduling method according to claim 8, wherein the task delay d_{i,j} of the task w_j to be executed is expressed as formula (3):

d_{i,j} = max{(1 − α_j)·t_j/S_j + (1 − α_j)·c_j/p^c_j, α_j·t_j/B_j + r_{i,j}/c + α_j·c_j/p_{i,j}}    formula (3)

and the objective function is:

J = min Σ_{j∈J} Σ_{i∈I} x_{i,j}·d_{i,j}

so as to satisfy

Σ_{j∈J} x_{i,j}·p_{i,j} ≤ p^{e,max}_i

Σ_{j∈J} p^c_j ≤ p^{c,max}

0 ≤ α_j ≤ 1

d_{i,j} ≤ D_j

Σ_{i∈I} x_{i,j} = 1

wherein p^{e,max}_i represents the upper limit of the computing power of the edge server ES_i, p^{c,max} represents the upper limit of the computing power of the cloud, α_j represents the proportion of the second task part of the task w_j to be executed by the edge server ES_i, and D_j represents the maximum tolerance of the task w_j to be executed for its task delay d_{i,j}.
10. An energy router for scheduling tasks in an energy internet, the energy internet comprising a cloud and at least one microgrid, each microgrid comprising the energy router and at least one energy local area network, each energy local area network comprising an edge server and a plurality of terminal devices, the energy router comprising:
an information acquisition unit configured to acquire information on tasks to be executed by each terminal device within the microgrid in which the energy router is located;
a scheme formulation unit configured to formulate a task scheduling scheme according to the acquired information related to the tasks to be executed so as to optimize a total task delay, wherein the task scheduling scheme for each task comprises a division of the task into a first task part and a second task part, the first task part being executed by the cloud and the second task part being executed by a corresponding single edge server.
11. The energy router according to claim 10, wherein the information related to the task to be executed comprises a data transmission amount and a task processing amount of the task to be executed, and the energy router further comprises a sending unit configured to: send a signal comprising the task scheduling scheme to each terminal device having a task to be executed, wherein the total task delay is the sum of the task delays of the tasks to be executed, and the task delay of each task to be executed is the larger of the cloud-side task execution delay and the edge-side task execution delay.
12. The energy router according to claim 11, wherein the task scheduling scheme formulated by the scheme formulation unit provides that the second task part of each task is executed by an edge server within the microgrid in which the energy router is located.
13. The energy router according to claim 11, wherein the scheme formulation unit is further configured to:
formulate the task scheduling scheme according to the acquired information related to the tasks to be executed, taking into account the working condition of each edge server and the distance between each terminal device having a task to be executed and each edge server, wherein the task scheduling scheme further comprises the proportions of the first task part and the second task part and the corresponding single edge server.
14. The energy router according to claim 13, wherein the scheme formulation unit is further configured to:
determine the sum of the network transmission delay and the task computation delay of the edge side as the task execution delay of the edge side;
determine the sum of the network transmission delay and the task computation delay of the cloud side as the task execution delay of the cloud side; and
formulate the task scheduling scheme through an optimization model according to the acquired information related to the tasks to be executed, wherein the optimization model takes the total task delay as an objective function and the computing capacity of each edge server as a constraint condition.
15. The energy router according to claim 14, wherein the optimization model further takes the computing power of the cloud and the tolerance of each task for its respective task delay as constraint conditions, the objective function comprises a convex function, and the scheme formulation unit solves the optimization model via a particle swarm optimization algorithm to formulate the task scheduling scheme.
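Claim 15 states only that the optimization model is solved via a particle swarm optimization algorithm; it does not give the algorithm's parameters. The sketch below is a minimal, standalone illustration of that technique under assumed values, not the patented implementation: a standard global-best PSO chooses the edge-side proportion of a single task so that the larger of the edge-side and cloud-side delays is minimized. The swarm size, inertia weight, acceleration coefficients, and all rate values are assumptions.

```python
import random

# Objective for one task: the larger of edge-side and cloud-side delay
# as a function of the edge-side proportion alpha. All rates are assumed.
def delay(alpha):
    b, c = 100.0, 10.0                       # processing / data amounts (assumed)
    t_edge = alpha * c / 20.0 + alpha * b / 50.0
    t_cloud = (1 - alpha) * c / 5.0 + (1 - alpha) * b / 200.0
    return max(t_edge, t_cloud)

def pso(f, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimization over alpha in [0, 1]."""
    rng = random.Random(seed)
    pos = [rng.random() for _ in range(n_particles)]   # positions in [0, 1]
    vel = [0.0] * n_particles
    best = pos[:]                                      # per-particle best positions
    gbest = min(best, key=f)                           # swarm-wide best position
    for _ in range(n_iter):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # Standard velocity update: inertia + cognitive + social terms.
            vel[i] = (w * vel[i] + c1 * r1 * (best[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            # Clamp the position so the proportion stays feasible.
            pos[i] = min(1.0, max(0.0, pos[i] + vel[i]))
            if f(pos[i]) < f(best[i]):
                best[i] = pos[i]
        gbest = min(best, key=f)
    return gbest

alpha_opt = pso(delay)
```

With the assumed rates the objective reduces to 2.5 * max(alpha, 1 - alpha), so the swarm should converge near the balanced split alpha = 0.5, where neither the edge side nor the cloud side is the bottleneck.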
16. The energy router of claim 14, wherein the energy internet comprises N microgrids, and wherein, for one of the N microgrids, X = {x_{i,j}} in the task scheduling scheme takes the values:

x_{i,j} = 1 if the task of the terminal device ED_j is offloaded to the edge server ES_i for execution, and x_{i,j} = 0 otherwise,

wherein N is a positive integer, X is an I × J matrix, I and J respectively represent the set of edge servers and the set of terminal devices under the microgrid, ED_j represents the j-th terminal device under the microgrid (j ∈ J), ES_i represents the i-th edge server under the microgrid (i ∈ I), and the task of each terminal device can be offloaded to only one edge server for execution, namely Σ_{i∈I} x_{i,j} = 1.
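The assignment matrix of claim 16 can be illustrated as follows. The 3-server, 4-device sizes and the particular offloading choices are arbitrary examples; only the one-task-one-server structure (each column of X sums to exactly 1) comes from the claim.

```python
# Illustrative sketch of the assignment matrix X = {x_ij} of claim 16:
# x_ij = 1 if the task of terminal device ED_j is offloaded to edge
# server ES_i, and 0 otherwise. Sizes and choices below are arbitrary.

def make_assignment(choices, n_servers):
    """choices[j] = index i of the edge server that executes ED_j's task."""
    return [[1 if choices[j] == i else 0 for j in range(len(choices))]
            for i in range(n_servers)]

# ED_0 -> ES_0, ED_1 -> ES_2, ED_2 -> ES_1, ED_3 -> ES_2
X = make_assignment(choices=[0, 2, 1, 2], n_servers=3)

# Constraint of claim 16: each task is offloaded to exactly one server,
# i.e. every column of the I x J matrix sums to 1.
assert all(sum(X[i][j] for i in range(3)) == 1 for j in range(4))
```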
17. A microgrid system comprising an energy router according to any of claims 10 to 16 and at least one energy local area network.
18. A non-transitory computer-readable storage medium having stored thereon instructions that, when executed by a processor, perform a task scheduling method according to any one of claims 1 to 9.
CN202011268951.9A 2020-11-13 2020-11-13 Task scheduling method, energy router, microgrid system and storage medium Pending CN114489962A (en)

Publications (1)

Publication Number Publication Date
CN114489962A true CN114489962A (en) 2022-05-13

Family

ID=81490040



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination