CN116634388A - Electric power fusion network-oriented big data edge caching and resource scheduling method and system
- Publication number
- CN116634388A (application CN202310922352.1A)
- Authority
- CN
- China
- Prior art keywords
- power grid
- fog node
- power
- node
- fog
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04W 4/30: Services specially adapted for particular environments, situations or purposes
- H04L 67/10: Protocols in which an application is distributed across nodes in the network
- H04L 67/12: Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
- H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
- H04W 24/02: Arrangements for optimising operational condition
- H04W 28/14: Flow control between communication endpoints using intermediate storage
- Y04S 10/50: Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The invention provides a big data edge caching and resource scheduling method and system for power fusion networks, relating to the technical field of power wireless communication. The method comprises the following steps: acquiring the association relationship between each power grid terminal and each fog node, the cooperation relationship between fog nodes, and the grid data caching strategy of each fog node; establishing a delay model of the grid data transmission task based on the association relationship, the cooperation relationship and the caching strategy, and taking minimization of the delay given by the model as the optimization objective; and decomposing the optimization objective into a plurality of optimization sub-objectives, and determining the grid resource allocation result based on the optimization sub-objectives and constraint conditions. The method and system overcome the unreasonable grid resource allocation that occurs during grid data transmission in the prior art, realize effective information sharing between grid terminals and fog nodes, and rapidly allocate grid resources while meeting the delay constraint, thereby achieving a better transmission effect.
Description
Technical Field
The invention relates to power wireless communication technology, and in particular to a big data edge caching and resource scheduling method and system for power fusion networks.
Background
With the diversified development of applications and services in the new-type power system, the requirements on the structural complexity, throughput and real-time performance of transmitted data keep rising, so the existing system faces poor service adaptation flexibility, strained network resources and high energy consumption when carrying big data. Against this background, the fog radio access network (Fog Radio Access Network, F-RAN) architecture, based on the concept of fog computing, was proposed in the prior art. By utilizing technologies such as cloud computing and edge caching, the F-RAN can effectively support the collaborative fusion of communication, caching and computing, thereby realizing efficient allocation of these three resources and meeting the service requirements of different users and businesses.
However, there are few studies on caching strategies and resource allocation methods for the big data traffic of new-type power systems. Existing algorithms generally determine the content cache placement and size allocation proportion based on content popularity, thereby reducing the energy consumption of data transmission. In practical grid business applications, however, real-time changes in content popularity are difficult to learn accurately, which leads to unreasonable allocation of grid resources during content transmission. In addition, existing algorithms have high computational complexity and are difficult to apply to large-scale smart grid scenarios.
Disclosure of Invention
The invention provides a big data edge caching and resource scheduling method and system for power fusion networks, which overcome the unreasonable grid resource allocation during grid data transmission in the prior art, realize effective information sharing between grid terminals and fog nodes, and rapidly allocate grid resources while meeting the delay constraint, so as to achieve a better transmission effect.
The invention provides a big data edge caching and resource scheduling method for power fusion networks, which comprises the following steps:
acquiring the association relationship between each power grid terminal and each fog node, the cooperation relationship between fog nodes, and the grid data caching strategy of each fog node;
establishing a delay model of the grid data transmission task based on the association relationship, the cooperation relationship and the caching strategy, and taking minimization of the delay given by the model as the optimization objective;
decomposing the optimization objective into a plurality of optimization sub-objectives, and determining the grid resource allocation result based on the optimization sub-objectives and constraint conditions.
According to the big data edge caching and resource scheduling method for power fusion networks provided by the invention, acquiring the association relationship between each grid terminal and each fog node, the cooperation relationship between fog nodes, and the grid data caching strategy of each fog node comprises:
acquiring the communication rule of the resource allocation system, and obtaining the association relationship, the cooperation relationship and the caching strategy based on the communication rule; wherein the communication rule includes:
the grid terminal selects, from the fog nodes, an associated fog node having an association relationship with it, based on its request for target grid data;
when the associated fog node caches the target grid data, the grid terminal communicates with the associated fog node;
when the associated fog node does not cache the target grid data, the grid terminal communicates with a cooperative fog node that has a cooperation relationship with the associated fog node, or communicates with the grid data management center through a backhaul link.
According to the big data edge caching and resource scheduling method for power fusion networks provided by the invention, the delay model includes:
the transmission delay between the grid terminal and the associated fog node, the transmission delay between the associated fog node and the cooperative fog node, and the transmission delay between the associated fog node and the grid data management center.
According to the big data edge caching and resource scheduling method for power fusion networks provided by the invention, the optimization sub-objectives include:
a sub-objective that fixes the fog node transmit power, the grid data caching strategy and the cooperation relationship, and optimizes the association relationship;
a sub-objective that fixes the fog node transmit power and the association relationship, and optimizes the grid data caching strategy and the cooperation relationship;
a sub-objective that fixes the grid data caching strategy, the association relationship and the cooperation relationship, and optimizes the fog node transmit power.
According to the power fusion network big data edge caching and resource scheduling method provided by the invention, for the sub-objective that optimizes the grid data caching strategy and the cooperation relationship, determining the grid resource allocation result based on the optimization sub-objectives and constraint conditions comprises:
for each fog node having an association relationship with a grid terminal, independently training on the request data within the service range of that fog node using a reinforcement learning algorithm, and determining the optimal solution of the grid data caching strategy of each fog node;
and, for each fog node having an association relationship with a grid terminal, determining, based on the resulting target caching strategy of each fog node, the fog nodes that have cached the same content as target cooperative fog nodes, and taking the relationship between the target cooperative fog nodes and each fog node as the optimal solution of the cooperation relationship.
According to the power fusion network big data edge caching and resource scheduling method provided by the invention, for the sub-objective that optimizes the fog node transmit power, determining the grid resource allocation result based on the optimization sub-objectives and constraint conditions comprises:
solving the sub-objective of fog node transmit power using an acceleration algorithm based on Nesterov momentum, which makes the first-order algorithm converge quickly by adjusting the direction of each iteration, thereby determining the optimal solution of the fog node transmit power.
According to the power fusion network-oriented big data edge caching and resource scheduling method provided by the invention, the constraint conditions comprise:
fog node storage capacity constraints, fog node transmit power constraints, backhaul link capacity constraints, grid data caching strategy constraints, association relationship value constraints, and cooperation relationship value constraints.
The invention also provides a big data edge caching and resource scheduling system for power fusion networks, comprising: a grid data management center, a plurality of fog nodes and a plurality of grid terminals;
the grid data management center is configured to acquire the association relationship between each grid terminal and each fog node, the cooperation relationship between fog nodes, and the grid data caching strategy of each fog node;
it is further configured to establish a delay model of the grid data transmission task based on the association relationship, the cooperation relationship and the caching strategy, taking minimization of the delay given by the model as the optimization objective;
and to decompose the optimization objective into a plurality of optimization sub-objectives and determine the grid resource allocation result based on the optimization sub-objectives and constraint conditions.
The invention also provides an electronic device, comprising a memory, a processor and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the program, implements the big data edge caching and resource scheduling method for power fusion networks described above.
The invention also provides a non-transitory computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the big data edge caching and resource scheduling method for power fusion networks described in any one of the above.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the big data edge caching and resource scheduling method for power fusion networks described above.
According to the power fusion network-oriented big data edge caching and resource scheduling method and system, the power grid data is cached in advance by utilizing the edge caching technology, so that the access pressure and the calculation cost of the power grid data management center are shared, and the transmission delay and the power consumption of the power grid business data are further reduced. The optimization problem with the aim of minimizing the data transmission delay is established, the cache strategy and the communication resource are respectively optimized by disassembling the optimization sub-targets, and the effective information sharing among the intelligent power grid terminals and the quick control instruction issuing meeting the delay constraint are realized.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a power fusion network-oriented big data edge caching and resource scheduling method provided by the invention;
FIG. 2 is a schematic diagram of a distributed reinforcement learning optimization caching strategy provided by the invention;
FIG. 3 is a schematic diagram of a power fusion network-oriented big data edge cache and resource scheduling system provided by the invention;
fig. 4 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The method of the embodiment of the invention is applied to a resource allocation system, in particular a video surveillance system for a smart grid, which comprises a grid data management center (a cloud platform storing all video files), a plurality of fog nodes (edge clouds with video processing and storage capability) and a plurality of grid terminals (cameras). As shown in fig. 1, the method according to the embodiment of the present invention comprises at least the following steps:
Step 101, acquiring the association relationship between each grid terminal and each fog node, the cooperation relationship between fog nodes, and the grid data caching strategy of each fog node;
Step 102, establishing a delay model of the grid data transmission task based on the association relationship, the cooperation relationship and the caching strategy, and taking minimization of the delay given by the model as the optimization objective;
Step 103, decomposing the optimization objective into a plurality of optimization sub-objectives, and determining the grid resource allocation result based on the optimization sub-objectives and constraint conditions.
For step 101, it should be noted that the resource allocation system according to the embodiment of the present invention includes a grid data management center (a central cloud or cloud computing center), a plurality of access fog nodes with edge caches (edge clouds), and various smart grid terminals. The grid data caching strategy refers to whether a fog node caches the target grid data requested by a given terminal.
It is assumed that the edge-cache-based fog radio access network comprises $M$ fog nodes with edge caching capability, $N$ grid terminals and one grid data management center, and that the fog node set and the grid terminal set are denoted by $\mathcal{M}=\{1,\dots,M\}$ and $\mathcal{N}=\{1,\dots,N\}$, respectively. In addition, it is assumed that the cloud computing center stores all service contents that terminals can request, denoted by the set $\mathcal{C}=\{1,\dots,C\}$, where service file $c\in\mathcal{C}$ has data size $D_c$ and each service file stores the corresponding grid data. The content cached at a fog node is delivered to it from the cloud computing center through a capacity-limited backhaul link, and the edge cache capacity of each fog node is $Q$ Mbit.
In particular, the invention defines $\mathbf{X}=\{x_{m,c}\}$ as the caching strategy set of all contents at the fog nodes, $\mathbf{A}=\{a_{n,m}\}$ as the set of association relationships between all terminals and fog nodes, and $\mathbf{B}=\{b_{m,m'}\}$ as the set of cooperation relationships among all fog nodes.
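For concreteness, the three decision sets can be represented as binary matrices. The following is a minimal sketch; the dimensions and variable names are illustrative assumptions, not mandated by the embodiment:

```python
import numpy as np

# Illustrative dimensions: M fog nodes, N grid terminals, C service contents
M, N, C = 4, 12, 20

X = np.zeros((M, C), dtype=int)  # X[m, c] = 1 if fog node m caches content c
A = np.zeros((N, M), dtype=int)  # A[n, m] = 1 if terminal n is associated with fog node m
B = np.zeros((M, M), dtype=int)  # B[m, k] = 1 if fog nodes m and k cooperate

D = np.random.uniform(1.0, 10.0, size=C)  # content sizes D_c in Mbit (illustrative values)
Q = 40.0                                  # edge cache capacity per fog node, in Mbit

# Fog node cache capacity check: sum_c x_{m,c} * D_c <= Q must hold for every fog node m
assert all(X[m] @ D <= Q for m in range(M))
```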
For step 102, it should be noted that, in the embodiment of the present invention, a delay model is established according to the data transmission paths among the grid terminals, fog nodes and grid data management center in the resource allocation system, with the objective of minimizing the content transmission delay of the grid terminals. This yields a multivariable coupled optimization problem whose variables are the fog node caching strategy, the terminal-fog node association relationship, the fog node cooperation relationship and the fog node (AP) transmit power, subject to constraints such as fog node storage capacity, fog node transmit power, backhaul link capacity, and the caching, association and cooperation relationships.
It should be noted that, in step 103, because of the high-dimensional discrete variables and the complex variable coupling, the modeled optimization problem is non-convex and discontinuous, and is difficult to solve directly; it is very difficult even to find a feasible point that satisfies all of these non-convex constraints simultaneously. In addition, in the fusion network for the new-type power system, the number of terminals and fog nodes is very large, and traditional optimization algorithms such as the interior point method have computational complexity that is cubic in the network dimension, which is prohibitively high. Therefore, how to solve this non-convex, nonlinear problem with low computational complexity is the key to constructing the resource allocation scheme.
To solve the optimization problem efficiently, the embodiment of the invention decomposes the original multi-task coupled optimization problem into 3 independent optimization sub-problems, and obtains the optimal solution of the problem by alternately iterating over and optimizing the 3 sub-problems.
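The alternating iteration can be sketched as follows; the solver routines passed in are placeholders for the sub-problem solvers described in the rest of this section, and all names are illustrative:

```python
def alternating_optimization(solve_association, solve_caching_cooperation, solve_power,
                             total_delay, X, A, B, P, max_iter=20, tol=1e-3):
    """Alternately optimize the 3 sub-problems until the delay objective stops improving."""
    prev = float("inf")
    for _ in range(max_iter):
        A = solve_association(P, X, B)          # sub-problem 1: fix P, X, B; optimize A
        X, B = solve_caching_cooperation(P, A)  # sub-problem 2: fix P, A; optimize X, B
        P = solve_power(X, A, B)                # sub-problem 3: fix X, A, B; optimize P
        delay = total_delay(X, A, B, P)
        if prev - delay < tol:                  # converged: improvement below tolerance
            break
        prev = delay
    return X, A, B, P
```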
The power fusion network big data edge caching and resource scheduling method provided by the embodiment of the invention targets the new-type power system, integrating the power services of network big data and allocating the grid data caching strategy and grid resources. Based on efficient data transmission assisted by cooperative caching at the nodes, a data transmission delay model is established, an optimization problem aiming to minimize the data transmission delay is formulated, and the problem is then decomposed into optimization sub-objectives. Finally, effective information sharing among grid terminals and fast issuing of control instructions under the delay constraint are realized, completing the enabling of the cell-free distributed network for the fusion network of the new-type power system.
In some embodiments, obtaining the association relationship between each grid terminal and each fog node, the cooperation relationship between fog nodes, and the grid data caching strategy of each fog node includes:
acquiring the communication rule of the resource allocation system, and obtaining the association relationship, the cooperation relationship and the caching strategy based on the communication rule; the communication rule in this embodiment includes:
the grid terminal selects, from the fog nodes, an associated fog node having an association relationship with it, based on its request for target grid data;
when the associated fog node caches the target grid data, the grid terminal communicates with the associated fog node;
when the associated fog node does not cache the target grid data, the grid terminal communicates with a cooperative fog node that has a cooperation relationship with the associated fog node, or communicates with the grid data management center through a backhaul link.
It should be noted that the target grid data includes the massive shared data of the fusion network of the new-type power system, such as high-definition maps of distribution stations and unified instructions from the control center. The embodiment of the invention provides a communication rule for efficient data transmission assisted by cooperative caching, under which each grid terminal acts according to the content it requests. Specifically: a terminal in the grid is preferentially associated with a nearby fog node. If the associated fog node has cached the requested content, it can serve the terminal directly. If not, it further checks whether any of its cooperative fog nodes has cached the requested content; if so, the cached content is first transmitted to the associated fog node, which then serves the terminal. If neither the associated fog node nor any cooperative fog node has cached the requested content, the terminal communicates with the grid data management center.
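A short sketch of this request-routing rule, using the matrices X and B defined above; the function name and return format are illustrative:

```python
def route_request(n, c, assoc, X, B):
    """Decide how terminal n's request for content c is served.

    assoc[n] is the index of terminal n's associated fog node;
    X[m][c] == 1 means fog node m caches content c;
    B[m][k] == 1 means fog nodes m and k cooperate.
    """
    m = assoc[n]
    if X[m][c] == 1:                          # case 1: associated fog node has the content
        return ("associated_fog_node", m)
    for k in range(len(X)):                   # case 2: a cooperative fog node has it
        if B[m][k] == 1 and X[k][c] == 1:
            return ("cooperative_fog_node", k, "relayed_via", m)
    return ("grid_data_management_center", "backhaul_to", m)  # case 3: fetch over backhaul
```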
The embodiment of the invention is oriented to the converged-network big data services of the new-type power system, and abstractly establishes a cache-assisted fog radio access network. Assume the system includes a plurality of fog nodes with edge caching capability, denoted $\mathcal{M}$, a plurality of grid terminals, denoted $\mathcal{N}$, and a cloud computing center. In addition, it is assumed that the cloud computing center (grid data management center) stores all service contents that terminals can request, expressed as $\mathcal{C}=\{1,\dots,C\}$, where service file $c$ has data size $D_c$. The content cached at a fog node is delivered to it from the cloud computing center through a capacity-limited backhaul link, and the edge cache capacity of each fog node is $Q$ Mbit.
When each content transmission period starts, each terminal device randomly generates a request according to a Zipf distribution of content probabilities, and selects a corresponding fog node for association. If the associated fog node has cached the requested content, it can serve the terminal directly. Otherwise, there are two options: 1) the terminal device obtains the requested content from the cloud computing center, i.e. the requested content is first transmitted from the cloud computing center to the associated fog node and then forwarded to the terminal device; 2) the content is obtained from another fog node cooperating with the associated fog node, i.e. the associated fog node checks whether any of its cooperating fog nodes has cached the specific content; if so, the content cached by the cooperative fog node is first transmitted to the associated fog node, which serves the terminal cooperatively.
An embodiment of the big data edge caching and resource scheduling method for power fusion networks of the present invention is provided below. The embodiment defines $\mathbf{X}=\{x_{m,c}\}$ as the set of caching strategies for all contents at the fog nodes, $\mathbf{A}=\{a_{n,m}\}$ as the set of association relationships between all terminals and fog nodes, and $\mathbf{B}=\{b_{m,m'}\}$ as the set of cooperation relationships among all fog nodes.
Firstly, in the first case, all terminal devices are defined to access their associated fog nodes using orthogonal frequency division multiple access. If the fog node $m$ associated with terminal device $n$ caches the service content $c$ requested by terminal device $n$, the requested content is transmitted directly from fog node $m$ to terminal device $n$, with transmission rate

$$R_{n,m}=\frac{B}{K_m}\log_2\!\left(1+\frac{g_{n,m}\,p_{n,m}}{\sigma^2}\right) \qquad (1)$$

where $B$ is the bandwidth, $K_m$ is the number of terminal devices accessing fog node $m$, $g_{n,m}$ and $p_{n,m}$ are the corresponding channel gain and transmit power, and $\sigma^2$ is the noise variance. Define $\mathbf{P}=\{p_{n,m}\}$ as the transmit power set of all fog nodes. Accordingly, the content transmission delay is

$$T^{\mathrm{acc}}_{n,m,c}=\frac{D_c}{R_{n,m}} \qquad (2)$$

Secondly, in the second case, if fog node $m$ does not cache the content $c$ requested by terminal device $n$, fog node $m$ needs to further check whether a cooperating fog node $m'$ caches content $c$. If fog node $m'$ has cached content $c$, fog node $m'$ first transmits the cached content $c$ to fog node $m$, with transmission delay

$$T^{\mathrm{coop}}_{m',m,c}=\frac{D_c}{R_{m',m}} \qquad (3)$$

where $R_{m',m}$ is the transmission rate between two adjacent fog nodes.

Thirdly, in the third case, the requested service content $c$ is sent to terminal device $n$ through fog node $m$: if no cooperating fog node caches content $c$ either, terminal device $n$ must make a request to the cloud computing center through its associated fog node $m$. The request process is as follows: content $c$ is first transmitted to fog node $m$, and then from fog node $m$ to terminal device $n$. The delay of transmitting the requested content $c$ to fog node $m$ is

$$T^{\mathrm{bh}}_{m,c}=\frac{D_c}{R^{\mathrm{bh}}_{m}} \qquad (4)$$

where $R^{\mathrm{bh}}_{m}$ is the transmission rate of the backhaul link between the cloud computing center and fog node $m$.

Combining the 3 cases discussed above, the time required to send content $c$ from fog node $m$ to terminal device $n$ is

$$T_{n,m,c}=\begin{cases}T^{\mathrm{acc}}_{n,m,c}, & x_{m,c}=1\\ T^{\mathrm{coop}}_{m',m,c}+T^{\mathrm{acc}}_{n,m,c}, & x_{m,c}=0,\ b_{m,m'}\,x_{m',c}=1\\ T^{\mathrm{bh}}_{m,c}+T^{\mathrm{acc}}_{n,m,c}, & \text{otherwise}\end{cases} \qquad (5)$$

Accordingly, the content transmission time in one downlink content transmission period is modeled as the total delay over all requests:

$$T=\sum_{n\in\mathcal{N}}\sum_{m\in\mathcal{M}} a_{n,m}\,T_{n,m,c_n} \qquad (6)$$

where $c_n$ denotes the content requested by terminal $n$.
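A minimal sketch of the delay model of formulas 1-6; the function names are illustrative, and the channel values in the example are arbitrary assumptions:

```python
import numpy as np

def access_rate(bandwidth_hz, k_m, gain, power, noise_var):
    """Terminal access rate of formula 1: (B / K_m) * log2(1 + g * p / sigma^2)."""
    return (bandwidth_hz / k_m) * np.log2(1.0 + gain * power / noise_var)

def request_delay(d_c, r_access, cached_local, cached_coop, r_coop, r_backhaul):
    """Per-request delay combining the three cases of formula 5."""
    t = d_c / r_access                 # fog node -> terminal delay, formula 2
    if cached_local:
        return t                       # case 1: associated fog node caches the content
    if cached_coop:
        return t + d_c / r_coop        # case 2: cooperative fog node -> fog node, formula 3
    return t + d_c / r_backhaul        # case 3: cloud center -> fog node backhaul, formula 4

# Example: 10 Mbit file, 20 MHz shared by 4 terminals, g * p / sigma^2 = 100
r = access_rate(20e6, 4, 1e-7, 1.0, 1e-9)
print(request_delay(10e6, r, cached_local=False, cached_coop=True, r_coop=1e8, r_backhaul=2e7))
```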
After analyzing the transmission delay incurred in completing the content transmission, it can be observed that the delay is mainly determined by the caching strategy, the terminal device-fog node association relationship, the fog node cooperation relationship and the fog node transmit power. The big data edge caching and resource scheduling method for power fusion networks therefore realizes effective information sharing within the grid and fast issuing of control instructions under the delay constraint by jointly designing an efficient caching strategy, the terminal device-fog node association, the cooperation among fog nodes, and the transmit power allocation of the fog nodes, further completing the enabling of the cell-free distributed network for the wireless fusion network of the new-type power system.
In some embodiments, the delay model includes: the transmission delay between the power grid terminal and the associated fog node, the transmission delay between the associated fog node and the cooperative fog node and the transmission delay between the associated fog node and the power grid data management center.
According to the embodiments corresponding to formulas 1-6 above, the content transmission delay mainly includes three parts: the transmission delay between the grid terminal and the associated fog node, the transmission delay between the associated fog node and the cooperative fog node, and the transmission delay between the associated fog node and the grid data management center. In addition, according to the transmission delay models of the three different cases, the transmission delay is closely related to the fog node transmit power, the fog node caching strategy, the association between fog nodes and terminals, and the cooperation relationship among fog nodes, so the transmission delay can be reduced by optimizing these variables.
It should be noted that, assuming all grid terminals access their associated fog nodes using orthogonal frequency division multiple access, if the fog node associated with a grid terminal caches the service content requested by that terminal, the requested content is sent directly from the fog node to the terminal. In this case, the transmission delay includes only the transmission delay from the fog node to the terminal.
On the other hand, if the fog node does not cache the content requested by the terminal, it needs to further check whether the content is cached on a fog node with which it cooperates. If such a cooperative fog node has the cached content, that node first transmits the cached content to the associated fog node, which then sends the requested service content to the terminal. In this case, in addition to the transmission delay from the fog node to the terminal, the transfer of the cached content between the cooperating fog nodes creates additional delay.
If no cooperative fog node caches the content either, the terminal must make a request to the cloud computing center through its associated fog node. The content is first transmitted from the cloud computing center to the fog node over the backhaul link, and then from the fog node to the terminal; the delay therefore comprises the backhaul delay from the cloud computing center to the fog node and the access delay from the fog node to the terminal.
Specifically, in the embodiment of the invention, the transmission delay is mainly determined by the caching strategy $\mathbf{X}$, the terminal-fog node association $\mathbf{A}$, the fog node cooperation relationship $\mathbf{B}$ and the fog node transmit power $\mathbf{P}$.
In some embodiments, the constraints include:
fog node storage capacity constraints, fog node transmit power constraints, backhaul link capacity constraints, grid data caching strategy constraints, association relationship value constraints, and cooperation relationship value constraints.
The embodiment of the invention formulates an optimization problem that takes minimizing the content transmission delay $T$ as the objective, with the fog node caching strategy $\mathbf{X}$, terminal-fog node association $\mathbf{A}$, fog node cooperation relationship $\mathbf{B}$ and fog node transmit power $\mathbf{P}$ as variables, as shown in formula 7:

$$\begin{aligned}\min_{\mathbf{X},\mathbf{A},\mathbf{B},\mathbf{P}}\ & T\\ \text{s.t.}\ \mathrm{C1}:\ & \textstyle\sum_{c\in\mathcal{C}} x_{m,c}\,D_c\le Q,\ \forall m\in\mathcal{M}\\ \mathrm{C2}:\ & 0\le p_{n,m}\le p^{\max}_{m}\\ \mathrm{C3}:\ & \text{backhaul link capacity limit between each fog node and the cloud center}\\ \mathrm{C4}:\ & x_{m,c}\in\{0,1\}\\ \mathrm{C5}:\ & a_{n,m}\in\{0,1\}\\ \mathrm{C6}:\ & b_{m,m'}\in\{0,1\}\end{aligned} \qquad (7)$$

where constraint C1 represents the cache capacity of each fog node; constraint C2 is the transmit power limit of the fog nodes; constraint C3 is the backhaul link limit between each fog node and the cloud center; constraint C4 reflects the cache state of the fog nodes, with $x_{m,c}=1$ representing that service content $c$ is cached and otherwise not cached; constraint C5 is the association between terminals and fog nodes, with $a_{n,m}=1$ indicating that terminal $n$ and fog node $m$ have an association relationship and otherwise no association; and constraint C6 represents the cooperative caching relationship among fog nodes, with $b_{m,m'}=1$ describing that fog nodes $m$ and $m'$ cache cooperatively and otherwise the two fog nodes do not cooperate.
In some embodiments, the optimization sub-objectives include:
a sub-objective that fixes the fog node transmit power, the grid data caching strategy and the cooperation relationship, and optimizes the association relationship;
a sub-objective that fixes the fog node transmit power and the association relationship, and optimizes the grid data caching strategy and the cooperation relationship;
a sub-objective that fixes the grid data caching strategy, the association relationship and the cooperation relationship, and optimizes the fog node transmit power.
It should be noted that, in the embodiment of the present invention, the optimization problem is first split into 3 sub-problems. The first sub-problem fixes the transmit power $\mathbf{P}$, caching strategy $\mathbf{X}$ and cooperation relationship $\mathbf{B}$, and optimizes the terminal-fog node association $\mathbf{A}$; the second sub-problem fixes the transmit power $\mathbf{P}$ and association $\mathbf{A}$, and optimizes the caching strategy $\mathbf{X}$ and cooperation relationship $\mathbf{B}$; the third sub-problem fixes $\mathbf{X}$, $\mathbf{A}$ and $\mathbf{B}$, and optimizes the transmit power $\mathbf{P}$. The 3 sub-problems are optimized by alternating iteration, thereby obtaining an optimal solution of the original problem.
Specifically, for the terminal-fog node association $\mathbf{A}$, the embodiment of the invention uses the Euclidean distance between a terminal (camera) and a fog node (edge cloud) as the criterion, selecting the fog node closest to the terminal as its associated node. At the beginning of content transmission, the Euclidean distance between each terminal and every fog node is first computed to obtain the distance vector $\mathbf{d}_n$; the distances from terminal $n$ to the fog nodes are then sorted, and terminal $n$ finally selects the closest fog node as its associated fog node.
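A minimal sketch of this distance-based association; the coordinates and sizes are illustrative:

```python
import numpy as np

def associate_terminals(term_xy, fog_xy):
    """Associate each terminal with its nearest fog node by Euclidean distance."""
    # dist[n, m] = Euclidean distance between terminal n and fog node m
    dist = np.linalg.norm(term_xy[:, None, :] - fog_xy[None, :, :], axis=-1)
    return dist.argmin(axis=1)          # index of the nearest fog node for each terminal

# Usage: 12 cameras and 4 edge clouds placed at random in a 1 km x 1 km area
rng = np.random.default_rng(0)
assoc = associate_terminals(rng.uniform(0, 1000, (12, 2)), rng.uniform(0, 1000, (4, 2)))
```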
In some embodiments, for the sub-objective that optimizes the grid data caching strategy and the cooperation relationship, determining the grid resource allocation result based on the optimization sub-objectives and constraint conditions includes:
for each fog node having an association relationship with a grid terminal, independently training on the request data within the service range of that fog node using a reinforcement learning algorithm, and determining the optimal solution of the grid data caching strategy of each fog node;
and, for each fog node having an association relationship with a grid terminal, determining, based on the resulting target caching strategy of each fog node, the fog nodes that have cached the same content as target cooperative fog nodes, and taking the relationship between the target cooperative fog nodes and each fog node as the optimal solution of the cooperation relationship.
It should be noted that, for the caching strategy problem $\mathbf{X}$, distributed reinforcement learning is proposed to optimize the collaborative caching strategy at the fog nodes (APs), with each fog node responsible for data training within its region and for uploading model parameter updates to the cloud computing center. The specific algorithm flow is shown in fig. 2:
Each fog node is trained independently using a reinforcement learning algorithm and the request data within its service range. The cloud computing center merges the update parameters of each fog node to update the global model, and then sends the updated global model parameters back to each fog node. After several iterations, the final caching strategy is obtained. The cooperation relationship $\mathbf{B}$ can then be determined from the resulting caching strategy $\mathbf{X}$: fog nodes that have cached the same content are grouped into a fog node cluster with a cooperation relationship.
In some embodiments, distributed reinforcement learning includes the steps of:
1) Model initialization: the cloud computing center sends initial weight parameters to all participating fog nodes;
2) Local training: each fog node locally trains its own model parameters through deep reinforcement learning.
It should be noted that the local training process is as follows: each fog node creates a localized reinforcement learning model to make cache-content placement decisions. The input state during transmission period $t$ is $s_t=\{\mathbf{x}_m,\mathbf{q}_m\}$, where $\mathbf{x}_m$ represents the cache states of all contents on fog node $m$, and $\mathbf{q}_m$ represents the contents requested by the terminals connected to fog node $m$; for example, if content $c$ is requested by a terminal device associated with fog node $m$, then $q_{m,c}=1$, and otherwise $q_{m,c}=0$. Each action is assumed to represent one way of placing cached content. Considering that the whole network has $C$ contents available for terminals to request and each fog node caches at most $Q$ Mbit, the number of actions of the learning model equals the number of feasible cache placements, and the action space is $\mathcal{A}=\{a_1,\dots,a_{|\mathcal{A}|}\}$, where selecting an action means the fog node caches content according to the placement corresponding to that action; for example, $x_{m,c}=1$ indicates that content $c$ is to be cached, and otherwise it is not cached.
During a transmission period, the action taken by a fog node is defined as $a_t$. Each fog node feeds its state into a deep Q network, which outputs the Q values corresponding to the actions; an action is then selected according to a greedy scheme. Each fog node then caches the corresponding content according to the action selection result.
Further, at the beginning of content transmission period $t+1$, the current observed state is updated from $s_t$ to $s_{t+1}$, and agent $m$ stores the interaction experience $(s_t, a_t, r_t, s_{t+1})$ in a replay experience pool. After multiple interactions with the environment, a small batch of experiences is sampled at random from the experience replay pool to update the weight parameters of the deep Q network and minimize the mean square error between the target Q value and the predicted Q value, thereby obtaining an optimal caching strategy.
3) Updating training model parameters: after fog node $m$ obtains its own model parameters $\omega_m$ through learning and training, it further updates its model as $\omega_m \leftarrow \omega_m - \eta\,\nabla L(\omega_m)$, where $\eta$ is the learning rate.
4) Model update uploading: after completing local training, each fog node uploads its trained model parameters to the cloud computing center.
5) Weighted parameter update at the cloud computing center: after the model parameters are uploaded, the cloud computing center generates a new global model from all the received model parameters. The above steps are iterated several times to obtain the final caching strategy. The cooperation relationship $\mathbf{B}$ can then be determined from the resulting caching strategy $\mathbf{X}$: fog nodes that have cached the same content are grouped into a fog node cluster with a cooperation relationship.
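Steps 1)-5) can be sketched as follows. For brevity the sketch substitutes a tabular Q-function for the deep Q network and uses equal aggregation weights; all names and sizes are illustrative assumptions:

```python
import numpy as np

def local_q_update(Q, experiences, eta=0.1, gamma=0.9):
    """Steps 2)-3): local update from replayed (s, a, r, s') experiences."""
    for s, a, r, s_next in experiences:
        target = r + gamma * Q[s_next].max()   # target Q value
        Q[s, a] += eta * (target - Q[s, a])    # reduce (target - predicted)^2
    return Q

def aggregate(local_models, weights):
    """Step 5): the cloud center merges local parameters into a new global model."""
    return sum(w * Q for w, Q in zip(weights, local_models))

# Toy run: 3 fog nodes, 8 cache states, 4 placement actions
rng = np.random.default_rng(1)
global_Q = np.zeros((8, 4))
for _ in range(10):                            # iterations of steps 1)-5)
    local_models = []
    for _ in range(3):                         # each fog node trains locally (step 2)
        Q = global_Q.copy()                    # step 1): receive the global parameters
        batch = [(rng.integers(8), rng.integers(4), rng.random(), rng.integers(8))
                 for _ in range(32)]           # replayed experiences (illustrative)
        local_models.append(local_q_update(Q, batch))
    global_Q = aggregate(local_models, [1 / 3] * 3)  # steps 4)-5): upload and merge
```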
In some embodiments, for the sub-objective that optimizes the fog node transmit power, determining the grid resource allocation result based on the optimization sub-objectives and constraint conditions includes:
solving the sub-objective of fog node transmit power using an acceleration algorithm based on Nesterov momentum, which makes the first-order algorithm converge quickly by adjusting the direction of each iteration, thereby determining the optimal solution of the fog node transmit power.
It should be noted that the transmit power sub-problem is a complex non-convex problem that is difficult to solve directly. In particular, the problem dimension is large, and traditional optimization methods represented by the interior point method involve inverting the Hessian matrix, so their computational complexity is cubic in the problem dimension, making them difficult to apply to a smart grid with massive access. To address this, the embodiment of the invention first uses successive convex approximation to convert the non-convex problem into a convex one, and then solves the converted convex problem with a low-complexity first-order algorithm to obtain the optimal solution. To further reduce the number of iterations and accelerate convergence, the invention proposes an acceleration algorithm based on Nesterov momentum, which makes the first-order algorithm converge quickly to the optimal solution by adjusting the direction of each iteration. The acceleration flow is as follows:
$$\mathbf{p}^{(k+1)}=\mathbf{y}^{(k)}-\alpha\,\nabla f\!\left(\mathbf{y}^{(k)}\right) \qquad (8)$$

$$\mathbf{y}^{(k)}=\mathbf{p}^{(k)}+\beta_k\left(\mathbf{p}^{(k)}-\mathbf{p}^{(k-1)}\right) \qquad (9)$$

$$t_{k+1}=\frac{1+\sqrt{1+4t_k^2}}{2} \qquad (10)$$

$$\beta_k=\frac{t_k-1}{t_{k+1}} \qquad (11)$$

where formula 8 represents the first-order iterative system; unlike the gradient descent method, the Nesterov acceleration method performs the update at the extrapolated point $\mathbf{y}^{(k)}$. Formula 9 represents the acceleration system, whose acceleration term is $\beta_k(\mathbf{p}^{(k)}-\mathbf{p}^{(k-1)})$; this acceleration term is chosen because the current iteration direction is most likely correlated with the direction of the previous iteration. Formulas 10-11 represent the damping system: by iteratively adjusting the damping coefficient, the Nesterov acceleration method is guaranteed to obtain acceleration without missing the optimal point. In the smart grid oriented to massive access, the terminals, fog nodes and various computing and communication resources are of large dimension; the proposed first-order acceleration algorithm reduces the computational complexity, improves the convergence speed of the algorithm, shortens the computation time, and realizes real-time power allocation.
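A runnable sketch of the accelerated iteration of formulas 8-11; the quadratic objective is only a stand-in for the convexified power-allocation sub-problem, and all names are illustrative:

```python
import numpy as np

def nesterov_minimize(grad, p0, alpha=0.1, iters=200):
    """First-order method accelerated by Nesterov momentum, formulas 8-11."""
    p_prev, p, t = p0.copy(), p0.copy(), 1.0
    for _ in range(iters):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # damping system, formula 10
        beta = (t - 1.0) / t_next                          # momentum weight, formula 11
        y = p + beta * (p - p_prev)                        # acceleration step, formula 9
        p_prev, p, t = p, y - alpha * grad(y), t_next      # gradient step at y, formula 8
    return p

# Toy usage: minimize the smooth convex surrogate ||p - p_star||^2
p_star = np.array([0.5, 1.2, 0.8])
p_opt = nesterov_minimize(lambda p: 2.0 * (p - p_star), p0=np.zeros(3))
```

In an actual deployment the gradient would come from the convexified delay objective, with a projection onto the power constraints of C2 after each step.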
In some embodiments, determining the edge resource configuration result based on the plurality of optimization sub-objectives and constraint conditions includes:
performing alternating iterative optimization on the plurality of optimization sub-objectives under the constraint conditions to obtain the optimal solutions of the association relationship, the cooperation relationship, the grid data caching strategy and the fog node transmit power;
and determining the edge resource configuration result based on the optimal solutions of the association relationship, the cooperation relationship, the grid data caching strategy and the fog node transmit power.
It should be noted that the method of the embodiment of the invention lets a terminal in the grid preferentially associate with a nearby fog node and its cooperative fog nodes: if the associated or cooperative fog node has cached the requested content, service is provided directly; otherwise the terminal communicates with the grid data management center. A content caching and communication model is then established, and an optimization problem aiming to minimize the data transmission delay is formulated. Finally, a first-order acceleration algorithm based on distributed reinforcement learning and Nesterov momentum is designed to jointly optimize the caching and communication resources. By designing an edge caching strategy and resource allocation method that satisfy the information transmission time constraint, effective information sharing among grid terminals and fast issuing of control instructions under the delay constraint are realized, further completing the enabling of the cell-free distributed network for the fusion network of the new-type power system.
The power fusion network-oriented big data edge cache and resource scheduling system provided by the invention is described below, and the power fusion network-oriented big data edge cache and resource scheduling system described below and the power fusion network-oriented big data edge cache and resource scheduling method described above can be referred to correspondingly. As shown in fig. 3, a system according to an embodiment of the present invention includes: the system comprises a power grid data management center, a plurality of fog nodes and a plurality of power grid terminals;
The power grid data management center is used for acquiring the association relation between each power grid terminal and each fog node, the cooperation relation between each fog node and the power grid data caching strategy of each fog node;
the method is also used for establishing a time delay model of the power grid data transmission task based on the association relation, the cooperation relation and the power grid data caching strategy, and taking the minimized time delay model as an optimization target;
and the optimization target is decomposed into a plurality of optimization sub-targets, and the power grid resource allocation result is determined based on the plurality of optimization sub-targets and the constraint condition.
According to the big data edge caching and resource scheduling system for power fusion networks of the embodiment of the invention, grid data is cached in advance using edge caching technology, which offloads the access pressure and computation cost of the grid data management center and further reduces the transmission delay and power consumption of grid business data. An optimization problem aiming to minimize the data transmission delay is established, and the caching strategy and the communication resources are optimized separately by decomposing the problem into optimization sub-objectives, thereby realizing effective information sharing among smart grid terminals and fast issuing of control instructions under the delay constraint.
In some embodiments, obtaining the association relationship between each grid terminal and each fog node, the cooperation relationship between fog nodes, and the grid data caching strategy of each fog node includes:
acquiring the communication rule of the resource allocation system, and obtaining the association relationship, the cooperation relationship and the caching strategy based on the communication rule; wherein the communication rule includes:
the grid terminal selects, from the fog nodes, an associated fog node having an association relationship with it, based on its request for target grid data;
when the associated fog node caches the target grid data, the grid terminal communicates with the associated fog node;
when the associated fog node does not cache the target grid data, the grid terminal communicates with a cooperative fog node that has a cooperation relationship with the associated fog node, or communicates with the grid data management center through a backhaul link.
In some embodiments, the delay model includes:
the transmission delay between the power grid terminal and the associated fog node, the transmission delay between the associated fog node and the cooperative fog node and the transmission delay between the associated fog node and the power grid data management center.
In some embodiments, the optimization sub-objectives include:
a sub-objective that fixes the fog node transmit power, the grid data caching strategy and the cooperation relationship, and optimizes the association relationship;
a sub-objective that fixes the fog node transmit power and the association relationship, and optimizes the grid data caching strategy and the cooperation relationship;
a sub-objective that fixes the grid data caching strategy, the association relationship and the cooperation relationship, and optimizes the fog node transmit power.
In some embodiments, for the sub-objective that optimizes the grid data caching strategy and the cooperation relationship, determining the grid resource allocation result based on the optimization sub-objectives and constraint conditions includes:
for each fog node having an association relationship with a grid terminal, independently training on the request data within the service range of that fog node using a reinforcement learning algorithm, and determining the optimal solution of the grid data caching strategy of each fog node;
and, for each fog node having an association relationship with a grid terminal, determining, based on the resulting target caching strategy of each fog node, the fog nodes that have cached the same content as target cooperative fog nodes, and taking the relationship between the target cooperative fog nodes and each fog node as the optimal solution of the cooperation relationship.
In some embodiments, for the sub-objective that optimizes the fog node transmit power, determining the grid resource allocation result based on the optimization sub-objectives and constraint conditions includes:
solving the sub-objective of fog node transmit power using an acceleration algorithm based on Nesterov momentum, which makes the first-order algorithm converge quickly by adjusting the direction of each iteration, thereby determining the optimal solution of the fog node transmit power.
In some embodiments, the constraints include:
fog node storage capacity constraints, fog node transmit power constraints, backhaul link capacity constraints, grid data caching strategy constraints, association relationship value constraints, and cooperation relationship value constraints.
Fig. 4 illustrates a physical schematic diagram of an electronic device, as shown in fig. 4, which may include: processor 410, communication interface (Communications Interface) 420, memory 430 and communication bus 440, wherein processor 410, communication interface 420 and memory 430 communicate with each other via communication bus 440. The processor 410 may invoke logic instructions in the memory 430 to perform a power fusion network-oriented big data edge caching and resource scheduling method comprising:
acquiring the association relationship between each grid terminal and each fog node, the cooperation relationship between fog nodes, and the grid data caching strategy of each fog node;
establishing a delay model of the grid data transmission task based on the association relationship, the cooperation relationship and the caching strategy, and taking minimization of the delay given by the model as the optimization objective;
and decomposing the optimization objective into a plurality of optimization sub-objectives, and determining the grid resource allocation result based on the optimization sub-objectives and constraint conditions.
Further, the logic instructions in the memory 430 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention further provides a computer program product. The computer program product includes a computer program that may be stored on a non-transitory computer-readable storage medium; when the computer program is executed by a processor, the computer is capable of executing the power fusion network-oriented big data edge caching and resource scheduling method provided by the methods above, the method comprising:
acquiring an association relation between each power grid terminal and each fog node, a cooperation relation among the fog nodes, and a power grid data caching strategy of each fog node;
based on the association relation, the cooperation relation and the power grid data caching strategy, establishing a time delay model of a power grid data transmission task, and taking the minimized time delay model as an optimization target;
and decomposing the optimization target into a plurality of optimization sub-targets, and determining a power grid resource allocation result based on the plurality of optimization sub-targets and constraint conditions.
In still another aspect, the present invention further provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the power fusion network-oriented big data edge caching and resource scheduling method provided by the methods above, the method comprising:
acquiring an association relation between each power grid terminal and each fog node, a cooperation relation among the fog nodes, and a power grid data caching strategy of each fog node;
based on the association relation, the cooperation relation and the power grid data caching strategy, establishing a time delay model of a power grid data transmission task, and taking the minimized time delay model as an optimization target;
and decomposing the optimization target into a plurality of optimization sub-targets, and determining a power grid resource allocation result based on the plurality of optimization sub-targets and constraint conditions.
The apparatus embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or, of course, by means of hardware. Based on such understanding, the foregoing technical solutions, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The software product may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the methods of the various embodiments or of some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A power fusion network-oriented big data edge caching and resource scheduling method, characterized by comprising the following steps:
acquiring an association relation between each power grid terminal and each fog node, a cooperation relation among the fog nodes, and a power grid data caching strategy of each fog node;
based on the association relation, the cooperation relation and the power grid data caching strategy, establishing a time delay model of a power grid data transmission task, and taking the minimized time delay model as an optimization target;
and decomposing the optimization target into a plurality of optimization sub-targets, and determining a power grid resource allocation result based on the plurality of optimization sub-targets and constraint conditions.
2. The power fusion network-oriented big data edge caching and resource scheduling method according to claim 1, wherein the acquiring an association relation between each power grid terminal and each fog node, a cooperation relation among the fog nodes, and a power grid data caching strategy of each fog node comprises:
acquiring a communication rule of a resource distribution system, and acquiring, based on the communication rule, the association relation between each power grid terminal and each fog node, the cooperation relation among the fog nodes, and the power grid data caching strategy of each fog node; wherein the communication rule comprises:
the power grid terminal selects, based on a request for target power grid data, an associated fog node having an association relation from the fog nodes;
under the condition that the associated fog node has cached the target power grid data, the power grid terminal communicates with the associated fog node;
and under the condition that the associated fog node has not cached the target power grid data, the power grid terminal communicates with a cooperative fog node having a cooperation relation with the associated fog node, or communicates with a power grid data management center through a backhaul link.
3. The power fusion network-oriented big data edge caching and resource scheduling method according to claim 2, wherein the time delay model comprises:
a transmission delay between the power grid terminal and the associated fog node, a transmission delay between the associated fog node and the cooperative fog node, and a transmission delay between the associated fog node and the power grid data management center.
4. The power fusion network-oriented big data edge caching and resource scheduling method according to any one of claims 1 to 3, wherein the optimization sub-targets comprise:
an optimization sub-target of the association relation, in which the fog node transmission power, the power grid data caching strategy and the cooperation relation are fixed and the association relation is optimized;
an optimization sub-target of the power grid data caching strategy and the cooperation relation, in which the fog node transmission power and the association relation are fixed;
and an optimization sub-target of the fog node transmission power, in which the power grid data caching strategy, the association relation and the cooperation relation are fixed.
5. The power fusion network-oriented big data edge caching and resource scheduling method according to claim 4, wherein, for the optimization sub-target of the power grid data caching strategy and the cooperation relation, the determining the power grid resource allocation result based on the plurality of optimization sub-targets and constraint conditions comprises:
for each fog node having an association relation with the power grid terminal, independently training the request data within the fog node and its service range by a reinforcement learning algorithm, and determining an optimal solution of the power grid data caching strategy of each fog node;
and for each fog node having an association relation with the power grid terminal, based on the target power grid data caching strategy of each fog node, determining the fog nodes that have cached the same content as the target power grid data caching strategy as target cooperative fog nodes, and determining the relation between the target cooperative fog nodes and each fog node as the optimal solution of the cooperation relation.
6. The power fusion network-oriented big data edge caching and resource scheduling method according to claim 4, wherein, for the optimization sub-target of the fog node transmission power, the determining the power grid resource allocation result based on the plurality of optimization sub-targets and constraint conditions comprises:
solving the optimization sub-target of the fog node transmission power by an acceleration algorithm based on Nesterov momentum, which adjusts the direction of each iteration so that the first-order method converges quickly, thereby determining the optimal solution of the fog node transmission power.
7. The power fusion network-oriented big data edge caching and resource scheduling method according to any one of claims 1 to 3, wherein the constraint conditions comprise:
a fog node storage capacity constraint, a fog node transmission power constraint, a backhaul link capacity constraint, a power grid data caching strategy constraint, an association relation value constraint and a cooperation relation value constraint.
8. A power fusion network-oriented big data edge caching and resource scheduling system, characterized by comprising: a power grid data management center, a plurality of fog nodes and a plurality of power grid terminals;
wherein the power grid data management center is used for acquiring an association relation between each power grid terminal and each fog node, a cooperation relation among the fog nodes, and a power grid data caching strategy of each fog node;
is further used for establishing a time delay model of a power grid data transmission task based on the association relation, the cooperation relation and the power grid data caching strategy, and taking the minimized time delay model as an optimization target;
and is further used for decomposing the optimization target into a plurality of optimization sub-targets and determining a power grid resource allocation result based on the plurality of optimization sub-targets and constraint conditions.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the power fusion network-oriented big data edge caching and resource scheduling method of any of claims 1 to 7 when executing the program.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the power fusion network-oriented big data edge caching and resource scheduling method of any of claims 1 to 7.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310922352.1A (granted as CN116634388B) | 2023-07-26 | 2023-07-26 | Electric power fusion network-oriented big data edge caching and resource scheduling method and system |

Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN116634388A | 2023-08-22 |
| CN116634388B | 2023-10-13 |

Family

ID=87610344

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date | Status |
|---|---|---|---|---|
| CN202310922352.1A | Electric power fusion network-oriented big data edge caching and resource scheduling method and system | 2023-07-26 | 2023-07-26 | Active |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN116634388B (en) |
Citations (6)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105376182A | 2015-11-30 | 2016-03-02 | 国网吉林省电力有限公司信息通信公司 | Power grid resource management and allocation method and system |
| CN110149646A | 2019-04-10 | 2019-08-20 | 中国电力科学研究院有限公司 | A kind of smart grid method for managing resource and system based on time delay and handling capacity |
| US20220197232A1 | 2020-12-21 | 2022-06-23 | Hitachi Energy Switzerland Ag | Power Grid Resource Allocation |
| CN112925646A | 2021-03-12 | 2021-06-08 | 威胜信息技术股份有限公司 | Electric power data edge calculation system and calculation method |
| CN114340016A | 2022-03-16 | 2022-04-12 | 北京邮电大学 | Power grid edge calculation unloading distribution method and system |
| CN115865930A | 2022-08-29 | 2023-03-28 | 中国移动通信集团广东有限公司 | MEC dynamic adjustment method based on 5G power Internet of things |
Also Published As

| Publication number | Publication date |
|---|---|
| CN116634388B (en) | 2023-10-13 |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |