CN116319522A - Multipath forwarding method and system in computing power network - Google Patents
Multipath forwarding method and system in computing power network
- Publication number
- CN116319522A (application number CN202310249916.XA)
- Authority
- CN
- China
- Prior art keywords
- computing
- node
- power
- network
- path
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/22—Alternate routing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/12—Shortest path evaluation
- H04L45/121—Shortest path evaluation by minimising delays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/12—Avoiding congestion; Recovering from congestion
- H04L47/125—Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/101—Server selection for load balancing based on network conditions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1012—Server selection for load balancing based on compliance of requirements or conditions with available server resources
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention provides a multipath forwarding method and system in a computing power network. The method comprises: acquiring the computing power requirement and the delay requirement of a computing task; deleting, in the computing power network, links that do not meet the transmission bandwidth requirement value of the computing task and computing power nodes that do not meet the quantized computing power requirement value of the computing task; calculating a comprehensive index for each remaining candidate computing power node and screening a main target computing power node and a backup target computing power node based on the comprehensive index; calculating the shortest distance from the main target computing power node and the backup target computing power node to the entry node to obtain a main path and a backup path; and, when the link bandwidth resources, the computing power resources, and the main-path and backup-path processing delays of the computing power network all meet the requirements of the computing task, accepting the computing task, reserving computing resources for it in the computing power network, and updating the network state of the computing power network. The invention avoids congestion and packet loss of computing tasks in the computing power network and provides a deterministic guarantee for the transmission of computing tasks.
Description
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a multipath forwarding method and system in a power computing network.
Background
Edge computing is limited by server size and has limited computing power, and it is prone to network load imbalance in which some computing nodes are overloaded while others are idle. To address these problems, computing power network technology has been proposed, in which computing tasks are processed cooperatively by the computing power nodes: when the computing power of the node closest to a computing task is insufficient or that node is overloaded, the computing task can instead be processed on a more distant computing power node, avoiding queuing, packet loss and similar phenomena caused by too many tasks at a single node, thereby effectively improving the processing efficiency of computing tasks and the resource utilization of the network.
However, existing computing power networks generally consider only intelligent optimization of route allocation and reasonable selection of computing power nodes, without providing a deterministic guarantee for computing tasks during transmission. Congestion and packet loss caused by insufficient link bandwidth often occur while a computing task is being transmitted to a computing power node, and once a node in the network fails, the transmission of the computing task is severely affected, so deterministic and stable transmission guarantees cannot be provided for computing tasks.
Disclosure of Invention
In view of this, the embodiment of the invention provides a multipath forwarding method and a multipath forwarding system in a computing network, so as to solve the problem of congestion and packet loss caused by insufficient link bandwidth and the like in the transmission of computing tasks in the existing computing network.
An aspect of the present invention provides a multi-path forwarding method in a power computing network, the method being executed on a software defined network controller provided in the power computing network, the power computing network including a plurality of ingress nodes, a plurality of forwarding nodes, and a plurality of power computing nodes, the software defined network controller being configured to obtain, in real time, a network link state of the power computing network and a resource usage of the power computing nodes, the method comprising the steps of:
obtaining a computing task received by an entry node, wherein the attributes of the computing task comprise: the entry node reached by the computing task, the transmission bandwidth requirement value of the computing task, the quantized computing power requirement value, and the maximum service processing delay; and acquiring the remaining bandwidth resources of each link and the remaining computing power resources of each computing power node in the computing power network;
constructing the computational power network into a directed acyclic graph, wherein nodes in the directed acyclic graph represent the entry node, the forwarding node and the plurality of computational power nodes, edges in the directed acyclic graph represent links among the nodes, and the total bandwidth and the transmission delay are marked for each edge as attributes;
Comparing the residual bandwidth resources of each link with the transmission bandwidth requirement value, and deleting links which do not meet the transmission bandwidth requirement value; comparing the residual computational power resources of each computational power node with the computational power demand quantized value, deleting the computational power nodes which do not meet the computational power demand quantized value, and taking the residual computational power nodes as candidate computational power nodes;
calculating the hop count index of each candidate computing node according to the minimum hop count passing between the entry node and each candidate computing node, calculating the computing power index of each candidate computing node according to the residual computing power resource of each candidate computing node, and carrying out weighted summation on the hop count index and the computing power index of each candidate computing node to obtain a corresponding comprehensive index;
taking the candidate computing node with the maximum comprehensive index as a main target computing node;
searching for the shortest path between the entry node and the main target computing power node by using the Dijkstra algorithm, and taking the shortest path as a main path;
calculating the load rate of the main target computing power node; if the load rate of the main target computing power node is less than or equal to a set threshold, deleting the main path from the directed acyclic graph and using the Dijkstra algorithm to search for the shortest path between the entry node and the main target computing power node as a backup path; if the load rate of the main target computing power node is greater than the set threshold, searching the candidate computing power nodes for a backup target computing power node whose load rate is less than the set threshold and whose comprehensive index is the largest, and using the Dijkstra algorithm to search for the shortest path between the entry node and the backup target computing power node as the backup path;
And when the processing time delay of the main path and the backup path is smaller than or equal to the maximum processing time delay allowed by the computing task, the computing network receives and executes the computing task, and the transmission bandwidth requirement value and the computing power requirement quantification value of the computing task are reserved on the main path and the backup path so as to update the residual bandwidth resources of each link and the residual computing power resources of each computing power node in the computing network.
In some embodiments, after finding the shortest path between the entry node and the backup target computing power node as a backup path using the Dijkstra algorithm, the method further comprises:
if the processing delay of the backup path is greater than the maximum service processing delay, deleting the main path from the directed acyclic graph and then using the Dijkstra algorithm to search for the shortest path between the entry node and the main target computing power node as the backup path.
In some embodiments, the processing delay is the sum of the forwarding delay over each hop of the path and the computing delay of the computing power node, and is calculated as:

D_k = t_k,f + t_k,c, with t_k,f = Σ t_ij over all links ij ∈ L_k;

where D_k denotes the processing delay, t_ij the delay of hop (link) ij, t_k,c the computing delay, t_k,f the forwarding delay, L_k the transmission path, and k the computing task.
In some embodiments, updating the remaining bandwidth resources of each link and the remaining computing power resources of each computing power node in the computing power network comprises:
the remaining bandwidth resources of each link are updated as follows:

B_ij = W_ij - PB_ij - BB_ij;

where B_ij denotes the remaining bandwidth resources of the link, W_ij the total bandwidth resources of each link, PB_ij the main-path bandwidth resources already used by the link, and BB_ij the backup-path bandwidth resources already used by the link;

the remaining computing power resources of each computing power node are updated as follows:

K_i = M_i - N_i - L_i;

where K_i denotes the remaining computing power resources of a computing power node, M_i the total computing power resources of each computing power node, N_i the computing power resources used by the main path at the computing power node, and L_i the computing power resources used by the backup path at the computing power node.
In some embodiments, when the backup path is allocated to one computing task, the backup path occupies the corresponding link bandwidth resource, and when one link bandwidth is shared by two computing tasks, the remaining bandwidth resource of the link is set to 0.
In some embodiments, the load rate is calculated as:

γ_i = (M_i - K_i) / M_i;

where γ_i denotes the load rate, M_i the total computing power resources of the computing power node, K_i the remaining computing power resources of the computing power node, and i the i-th computing power node.
In some embodiments, the hop count index and the computing power index of a candidate computing power node A are calculated from the following quantities: α_A denotes the hop count index of computing power node A, β_A the computing power index of computing power node A, J_i the minimum number of hops required from the entry node to an arbitrary computing power node, J_A the minimum number of hops required from the entry node to computing power node A, K_A the remaining computing power resources of computing power node A, and K_i the remaining computing power resources of an arbitrary computing power node.
In some embodiments, the comprehensive index is calculated as:

η_k = γ·α_k + (1 - γ)·β_k;

where η_k denotes the comprehensive index, α_k the hop count index, β_k the computing power index, and γ is a non-negative constant less than 1.
In another aspect, the present invention also provides an electronic device, including a processor and a memory, where the memory stores computer instructions, and the processor is configured to execute the computer instructions stored in the memory, and when the computer instructions are executed by the processor, the apparatus implements the steps of the method described above.
In another aspect, the present invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the above method.
The invention has the advantages that:
the multipath forwarding method and system in the computing power network acquire the computing power requirement and the time delay requirement of the computing task, construct the main path and the backup path, accept the computing task when the link bandwidth resource, the computing power resource, the main path and the backup path processing time delay of the computing power network all meet the computing task requirement, and reserve the computing resource for the computing task in the computing power network. The invention distributes a backup path which is not intersected with the main path for the computing task while distributing the main path for the computing task, reserves communication resources on the backup path and computing power resources of backup target computing power nodes, and ensures the deterministic forwarding and processing of the computing task by transmitting the computing task through the backup path when the main path is congested or faulty.
Further, according to the distances from each candidate computing power node to the entrance node and the residual computing power resources of each candidate computing power node, the comprehensive index of each computing power node is calculated, and the candidate computing power node with the highest comprehensive index is selected as the main target computing power node or the backup target computing power node, so that when the main computing power target node and the backup computing power target node are selected for a computing task, the computing comprehensive index ensures that the service processing time meets the computing task requirement, and is beneficial to maintaining the load balance of the computing power network.
Furthermore, the computing task is accepted when the computing resource, the main path delay and the backup path delay of the computing network meet the requirements of the computing task, the congestion and packet loss phenomenon can not occur when the computing task accepted by the computing network is processed in the computing network through multi-layer screening, the stability of the computing network is ensured, and the user experience is improved.
Furthermore, the invention can improve the processing efficiency of the calculation task in the calculation power network, and simultaneously improve the whole resource utilization rate of the calculation power network, thereby improving the user satisfaction.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
It will be appreciated by those skilled in the art that the objects and advantages that can be achieved with the present invention are not limited to the above-described specific ones, and that the above and other objects that can be achieved with the present invention will be more clearly understood from the following detailed description.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate and together with the description serve to explain the invention. In the drawings:
fig. 1 is a flowchart of an algorithm of a multipath forwarding method in a power network according to an embodiment of the present invention.
Fig. 2 is a general flow chart of a multi-path forwarding method in a power network according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a computing network system architecture according to an embodiment of the invention.
Fig. 4 is a schematic diagram of a power network topology according to an embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the following embodiments and the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent. The exemplary embodiments of the present invention and the descriptions thereof are used herein to explain the present invention, but are not intended to limit the invention.
It should be noted here that, in order to avoid obscuring the present invention due to unnecessary details, only structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, while other details not greatly related to the present invention are omitted.
It should be emphasized that the term "comprises/comprising" when used herein is taken to specify the presence of stated features, elements, steps or components, but does not preclude the presence or addition of one or more other features, elements, steps or components.
It is also noted herein that the term "coupled" may refer to not only a direct connection, but also an indirect connection in which an intermediate is present, unless otherwise specified.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the drawings, the same reference numerals represent the same or similar components, or the same or similar steps.
With the development of network technology, more and more devices access the Internet and generate large amounts of data and computing tasks. Cloud computing decomposes huge computing tasks and, through the network "cloud", hands them to a system with powerful computing capability composed of many servers, which processes and analyzes them; the computing tasks can be completed in a short time and the results returned to the users. Although cloud computing can rapidly process large amounts of data, users need to upload their data to the cloud, and the privacy and security of user data are difficult to guarantee. In addition, cloud computing is generally accessed through a remote network, so network latency or disruption can easily have a large impact on it. Edge computing sinks computation to the edge side, so data does not need to be transmitted to the cloud over the network, network services respond faster, and user privacy can be better protected. However, edge computing is limited in computing power by server size and similar factors, and is prone to network load imbalance in which some computing nodes are overloaded while others are idle. To solve the above problems, computing power network technology has been proposed.
In the computing power network, the resources of the network can be mainly divided into computing power resources and communication resources, and along with the development of computing power sensing technology, the network can comprehensively sense the position, real-time state, load information and the like of the computing power resources and computing power services, and besides, the whole topology of the network and the state of each link are taken as communication resource information to be collected and updated in real time by a network controller. The prior art provides a system and a method for sensing and advertising computing power information in a computing power network, which can collect the sensed computing resource, storage resource and network resource information to generate computing power information and judge whether to advertise or not on the basis of saving link resources and reducing network overhead. However, in such a computing network, only intelligent optimization of route allocation and reasonable selection of computing nodes are generally considered, and deterministic guarantee of computing tasks in the transmission process is not considered. In the process of transmitting the calculation task to the calculation power node, congestion and packet loss phenomena caused by insufficient link bandwidth often occur, and once a certain node of the network fails, the transmission of the calculation task is greatly influenced, so that deterministic transmission guarantee cannot be provided for the calculation task. Therefore, the invention provides a multipath forwarding method and device in a computing power network, which solve the problem that the computing power network in the prior art cannot provide deterministic transmission guarantee for computing tasks.
One aspect of the present invention provides a multi-path forwarding method in a power computing network, where the method is executed on a software defined network controller disposed in the power computing network, the power computing network includes a plurality of ingress nodes, a plurality of forwarding nodes, and a plurality of power computing nodes, and the software defined network controller is configured to obtain, in real time, a network link state of the power computing network and a resource usage of the power computing nodes, as shown in fig. 1, and the method includes steps S101 to S108:
S101: obtaining the computing task received by an entry node, wherein the attributes of the computing task comprise: the entry node reached by the computing task, the transmission bandwidth requirement value of the task, the quantized computing power requirement value, and the maximum service processing delay; and obtaining the remaining bandwidth resources of each link and the remaining computing power resources of each computing power node in the computing power network.
S102: the computational power network is constructed as a directed acyclic graph, nodes in the directed acyclic graph represent ingress nodes, forwarding nodes and a plurality of computational power nodes, edges in the directed acyclic graph represent links between the nodes, and each edge is marked with total bandwidth and transmission delay as attributes.
S103: comparing the residual bandwidth resources of each link with the transmission bandwidth requirement value, and deleting links which do not meet the transmission bandwidth requirement value; and comparing the residual computational power resources of each computational power node with the computational power demand quantized values, deleting the computational power nodes which do not meet the computational power demand quantized values, and taking the residual computational power nodes as candidate computational power nodes.
S104: calculating the hop count index of each candidate computing power node according to the minimum hop count passed between the entry node and each candidate computing power node, calculating the computing power index of each candidate computing power node according to the residual computing power resource of each candidate computing power node, and carrying out weighted summation on the hop count index and the computing power index of each candidate computing power node to obtain a corresponding comprehensive index.
S105: and taking the candidate computing node with the largest comprehensive index as a main target computing node.
S106: searching for the shortest path between the entry node and the main target computing power node by using the Dijkstra algorithm, and taking the shortest path as the main path.
S107: calculating the load rate of the main target computing power node; if the load rate of the main target computing power node is less than or equal to a set threshold, deleting the main path from the directed acyclic graph and searching for the shortest path between the entry node and the main target computing power node as the backup path by using the Dijkstra algorithm; if the load rate of the main target computing power node is greater than the set threshold, searching the candidate computing power nodes for a backup target computing power node whose load rate is less than the set threshold and whose comprehensive index is the largest, and searching for the shortest path between the entry node and the backup target computing power node as the backup path by using the Dijkstra algorithm.
S108: when the processing delays of the main path and the backup path are both less than or equal to the maximum processing delay allowed by the computing task, the computing power network receives and executes the computing task, and the transmission bandwidth requirement value and the quantized computing power requirement value of the computing task are reserved on the main path and the backup path, so as to update the remaining bandwidth resources of each link and the remaining computing power resources of each computing power node in the computing power network.
In step S101, as shown in fig. 3, the computing power network is composed of entry nodes, forwarding nodes, computing power nodes and an SDN controller (software defined network controller). The SDN controller can sense the network topology and link states and the computing power resource states of the computing power nodes, and performs functions such as flow admission, path planning, resource allocation and configuration issuing according to the state of the computing power network; meanwhile, the SDN controller collects the transmission bandwidth and computing power requirements of a computing task and determines, from the remaining link bandwidth resources and remaining computing power resources of the computing power network, whether the requirements of the computing task can be met. A computing power node can provide the corresponding computing service for computing tasks by using the computing server deployed on the node. A computing power network has a plurality of entry nodes; computing tasks arrive at certain time intervals and enter the computing power network from the entry node closest to them. The entry node receives the computing task, and the forwarding nodes forward computing tasks that meet the requirements to the computing power nodes for computation.
In some embodiments, the attributes of a computing task may be expressed as:
S_k = (A_k, B_k, C_k, D_kmax);

where A_k denotes the entry node of the computing power network reached by computing task k, B_k the bandwidth required for transmitting computing task k, C_k the quantized computing power demand of computing task k, and D_kmax the maximum service processing delay allowed by computing task k.
In some embodiments, the computing power network initially accepts the computing task when the remaining bandwidth resources of each link in the computing power network are greater than or equal to the computing task transmission bandwidth demand value and the remaining computing power resources of each computing power node of the computing power network are greater than or equal to the computing task computing power demand quantized value.
In step S102, the computing power network topology is represented as a directed acyclic graph, which may be denoted by a directed graph G = (V, E), where V represents the set of nodes and E represents the set of links; the total bandwidth of each link is W_ij and the transmission delay of each link is t_ij; the total number of computing power nodes is N, denoted by the set ESP = {Esp_1, Esp_2, Esp_3, …, Esp_N}. As shown in fig. 4, the computing power network topology is composed of terminal devices, routing nodes, lightly loaded computing power nodes and heavily loaded computing power nodes, and the SDN controller obtains information such as the bandwidth requirement, computing power requirement and allowed maximum service processing delay of the computing tasks. The link bandwidths and the resource usage of the computing power nodes in the computing power network can be read from the directed acyclic graph.
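For illustration only, the directed-graph model of step S102 can be sketched as below. This is a minimal sketch using plain Python dictionaries; the class name, attribute keys and all numeric values are assumptions chosen for readability and are not part of the patent.

```python
# Minimal, illustrative sketch of the directed-graph model of step S102.
# Node names, attribute keys and numbers are invented for the example.

class PowerNetworkGraph:
    def __init__(self):
        self.nodes = {}   # name -> {"kind": ..., "M": total compute, "K": remaining compute}
        self.links = {}   # (u, v) -> {"W": total bandwidth, "B": remaining bandwidth, "t": delay}

    def add_node(self, name, kind, total_compute=0, remaining_compute=0):
        self.nodes[name] = {"kind": kind, "M": total_compute, "K": remaining_compute}

    def add_link(self, u, v, total_bw, remaining_bw, delay):
        # Directed edge u -> v with total bandwidth W_ij, remaining bandwidth B_ij, delay t_ij.
        self.links[(u, v)] = {"W": total_bw, "B": remaining_bw, "t": delay}


# Example topology: one entry node, two forwarding nodes, two computing power nodes.
g = PowerNetworkGraph()
g.add_node("A1", "entry")
g.add_node("F1", "forwarding")
g.add_node("F2", "forwarding")
g.add_node("Esp1", "compute", total_compute=100, remaining_compute=60)
g.add_node("Esp2", "compute", total_compute=100, remaining_compute=30)
g.add_link("A1", "F1", total_bw=10, remaining_bw=8, delay=1.0)
g.add_link("A1", "F2", total_bw=10, remaining_bw=5, delay=2.0)
g.add_link("F1", "Esp1", total_bw=10, remaining_bw=8, delay=1.5)
g.add_link("F2", "Esp2", total_bw=10, remaining_bw=6, delay=1.0)
```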
In step S103, links whose remaining bandwidth resources are smaller than the transmission requirement and computing power nodes whose remaining computing power resources are smaller than the quantized computing power requirement are deleted from the computing power network topology. Deleting the links and computing power nodes that do not meet the requirements of the computing task prevents them from being selected during path planning, which would otherwise affect the transmission of the computing task.
In some embodiments, before deleting links and computing nodes in the computing network topology that do not meet the computing task requirements, further comprising: the computing power network judges whether to accept the computing task according to the residual bandwidth resource and the residual computing power resource, if the computing power network meets the transmission bandwidth requirement value and the computing power requirement quantification value of the computing task according to the residual bandwidth resource and the residual computing power resource, the computing task is preliminarily accepted, otherwise, the computing task is refused. And after the computing power network preliminarily receives the computing task, deleting links and computing power nodes which do not meet the requirements of the computing task in the computing power network topology, and planning a forwarding path for the computing task.
In step S104, the hop count is the number of routers through which the computation task passes from the ingress node to the computation node.
In some embodiments, the hop count index and the computing power index are calculated from the following quantities: α_A denotes the hop count index of computing power node A, β_A the computing power index of computing power node A, J_i the minimum number of hops required from the entry node to an arbitrary computing power node, J_A the minimum number of hops required from the entry node to computing power node A, K_A the remaining computing power resources of computing power node A, and K_i the remaining computing power resources of an arbitrary computing power node. The fewer hops between a computing power node and the entry node, the higher its hop count index; the more computing power resources a computing power node has remaining, the higher its computing power index.
In some embodiments, the comprehensive index is calculated as:

η_k = γ·α_k + (1 - γ)·β_k;

where η_k denotes the comprehensive index, α_k the hop count index, β_k the computing power index, and γ is a non-negative constant less than 1.
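The index computation can be sketched as follows. The exact normalisation of the hop count index and the computing power index appears only in the original drawings, so this sketch assumes, for illustration, that the hop count index of a node is min_i(J_i)/J_A and that the computing power index is K_A/max_i(K_i); the weighted sum follows the description of the comprehensive index above, and the function name and default γ are assumptions.

```python
# Illustrative sketch of the hop count, computing power and comprehensive indices.
# ASSUMED normalisation: alpha = min_i(J_i)/J_node, beta = K_node/max_i(K_i).

def comprehensive_indices(hops, remaining, gamma=0.7):
    """hops: {node: minimum hop count J from the entry node}
    remaining: {node: remaining computing power resources K}
    Returns {node: (alpha, beta, eta)}."""
    j_min = min(hops.values())
    k_max = max(remaining.values())
    result = {}
    for node in hops:
        alpha = j_min / hops[node]        # fewer hops -> higher index (assumed form)
        beta = remaining[node] / k_max    # more remaining compute -> higher index (assumed form)
        eta = gamma * alpha + (1 - gamma) * beta
        result[node] = (alpha, beta, eta)
    return result

# Example: pick the candidate with the largest comprehensive index as the main target.
scores = comprehensive_indices({"Esp1": 2, "Esp2": 3}, {"Esp1": 60, "Esp2": 30}, gamma=0.9)
main_target = max(scores, key=lambda n: scores[n][2])
```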
In step S105, the candidate computing power node with the largest comprehensive index among the candidate computing power nodes is found and used as the main target computing power node for constructing the main path. By calculating the comprehensive indexes of the candidate computing power nodes, their overall performance can be compared, and taking the candidate computing power node with the highest comprehensive index as the main target computing power node ensures that the computing task can be better computed and transmitted at that node.
In step S106, the Dijkstra algorithm is used to search the directed acyclic graph for the shortest path from the entry node to the main target computing power node, and this shortest path is taken as the main path; the Dijkstra algorithm is simple and quickly yields the optimal solution. When calculating the main path, γ is often set to a constant close to 1 so as to increase the influence of the hop count index and prioritize delay factors. Calculating the comprehensive index of the candidate computing power nodes ensures that the service processing time meets the requirements while helping to maintain the load balance of the computing power network. After the computing power node with the highest comprehensive index is taken as the main target computing power node, the transmission delay of each link is used as the measure of link length, and the Dijkstra algorithm is used to calculate the shortest path from the entry node to the main target computing power node, which yields the main path of the computing task.
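A compact, self-contained sketch of the delay-weighted shortest-path search is given below. The adjacency-dictionary representation and example values are assumptions made for illustration; only the use of per-link transmission delay as the edge weight comes from the description above.

```python
import heapq

def dijkstra_by_delay(adj, src, dst):
    """adj: {u: {v: t_uv}} with per-link transmission delay t_uv as the edge weight.
    Returns (path, total_delay), or (None, inf) if dst is unreachable."""
    dist, prev, visited = {src: 0.0}, {}, set()
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, t in adj.get(u, {}).items():
            nd = d + t
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in visited:
        return None, float("inf")
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Example: main path from the entry node to the main target computing power node.
adj = {"A1": {"F1": 1.0, "F2": 2.0}, "F1": {"Esp1": 1.5}, "F2": {"Esp2": 1.0}}
main_path, forwarding_delay = dijkstra_by_delay(adj, "A1", "Esp1")
```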
In some embodiments, the shortest distance between the main target computing power node and the entry node is calculated using the Bellman-Ford algorithm to obtain the main path.
In step S107, after a main path has been planned for a computing task, a backup path also needs to be planned for it, so that the computing task can still be transmitted and computed in time if the main path becomes congested. When planning the backup path, the main path must be deleted from the directed acyclic graph, so that the backup path does not cross the main path and cause congestion. After the links and computing power nodes that do not meet the requirements have been deleted from the directed acyclic graph, if only one candidate computing power node has sufficient remaining computing power resources, then that candidate computing power node is the target node of both the main path and the subsequently allocated backup path. The threshold is set as a percentage, i.e. the load rate of a computing power node must not exceed a given percentage of its total computing power resources; the threshold can be modified as required.
In some embodiments, the load rate is calculated as:

γ_i = (M_i - K_i) / M_i;

where γ_i denotes the load rate, M_i the total computing power resources of the computing power node, K_i the remaining computing power resources of the computing power node, and i the i-th computing power node. The load rate represents the percentage of used computing power resources relative to the total computing power resources of the computing power node; the used computing power resources are the sum of the computing power resources occupied by all computing tasks at that node.
In step S108, after the computing power network has planned a main path and a backup path for the computing task, the computing task is accepted if both the main path and the backup path meet its delay requirement, and computing resources are reserved for the computing task at the corresponding computing power node, so that the forwarding nodes can forward the computing task to the computing power node for computation. The SDN controller decides whether the computing task is allowed to be computed according to the judgment conditions on the main path and the backup path. Once the computing task meets the requirements, the SDN controller updates the computing resources of each link and each computing power node and reserves computing resources for the computing task. The computing power network allocates network resources to computing tasks in turn according to their order of arrival; each time the resource allocation of one computing task is completed, the network resource state is updated to the SDN controller and the next computing task is processed. The processing delay of a computing task comprises a forwarding delay and a computing delay: the forwarding delay is the sum of the delays of all hops, and the computing delay is the delay of the computing power node processing the computing task.
In some embodiments, the processing delay of the computing task is calculated as:

D_k = t_k,f + t_k,c, with t_k,f = Σ t_ij over all links ij ∈ L_k;

where D_k denotes the processing delay, t_ij the delay of each hop (link ij), t_k,c the computing delay, t_k,f the forwarding delay, L_k the transmission path, k the computing task, and ij a link. The processing delay of the computing task comprises the forwarding delay and the computing delay.
In some embodiments, updating the remaining bandwidth resources of each link and the remaining power resources of each power node in the power network comprises:
the remaining bandwidth resources of each link are updated as follows:
B_ij = W_ij - PB_ij - BB_ij;

where B_ij denotes the remaining bandwidth resources of the link, W_ij the total bandwidth resources of each link, PB_ij the main-path bandwidth resources already used by the link, and BB_ij the backup-path bandwidth resources already used by the link.
The remaining computing power resources of each computing power node are updated as follows:
K_i = M_i - N_i - L_i;

where K_i denotes the remaining computing power resources of a computing power node, M_i the total computing power resources of each computing power node, N_i the computing power resources used by the main path at the computing power node, and L_i the computing power resources used by the backup path at the computing power node.
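The two update formulas translate directly into small helper functions; the function and variable names below are illustrative only.

```python
def update_link_bandwidth(W_ij, PB_ij, BB_ij):
    """Remaining link bandwidth: B_ij = W_ij - PB_ij - BB_ij."""
    return W_ij - PB_ij - BB_ij

def update_node_compute(M_i, N_i, L_i):
    """Remaining computing power of a node: K_i = M_i - N_i - L_i."""
    return M_i - N_i - L_i

# Example: a 10-unit link carrying 3 units of primary-path traffic and a 2-unit
# backup reservation has B_ij = 5; a node with 100 units of compute, 40 reserved
# for primary paths and 10 for backup paths, has K_i = 50.
B_ij = update_link_bandwidth(10, 3, 2)
K_i = update_node_compute(100, 40, 10)
```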
In some embodiments, when a backup path is allocated to a computing task, the backup path occupies the corresponding link bandwidth resources, and when the bandwidth of a link is shared by two computing tasks, the remaining bandwidth resources of that link are set to 0, that is, a link can carry at most two computing tasks simultaneously.
In some embodiments, when the processing delay of the main path is greater than the maximum processing delay allowed by the computing task, the computing task is rejected; when it is less than or equal to the allowed maximum processing delay, the computing task is provisionally accepted, and it is then judged whether the processing delay of the backup path meets the requirement of the computing task. When the backup target computing power node is the same as the main target computing power node, the computing task is accepted if the processing delay of the backup path is less than or equal to the maximum processing delay allowed by the computing task, and is otherwise rejected. When the backup target computing power node differs from the main target computing power node, the computing task is accepted if the processing delay of the backup path is less than or equal to the allowed maximum processing delay; if the processing delay of the backup path is greater than the allowed maximum processing delay, backup target computing power nodes that meet the maximum processing delay allowed by the computing task are searched in descending order of comprehensive index. If none of the candidate computing power nodes meets the requirement, the load rate of the candidate computing power nodes is no longer considered and only the processing delay is considered: after deleting the main path from the directed acyclic graph, the Dijkstra algorithm is used to search for the shortest path between the entry node and the main target computing power node as the backup path. Because the load rate is not considered, the backup target computing power node is then the same as the main target computing power node, i.e. the candidate computing power node with the highest comprehensive index. If the processing delay of the backup path still does not meet the maximum service processing delay allowed by the computing task, the computing task is rejected. Only when both the main path and the backup path meet the maximum service processing delay allowed by the computing task is the computing task forwarded by the forwarding nodes to the corresponding target computing power node for computation.
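The fallback order described above can be sketched as a single selection function. This is an illustrative sketch only: the candidate list, per-node backup-path delays and the 80% default threshold are supplied or assumed by the caller, and the function name is hypothetical.

```python
# Sketch of the backup-target fallback order; inputs are precomputed by the caller.

def choose_backup_target(candidates, comprehensive, load_rate, backup_delay,
                         main_target, d_max, load_threshold=0.8):
    """candidates: candidate computing power nodes; comprehensive[n]: comprehensive index;
    load_rate[n]: load rate in [0, 1]; backup_delay[n]: processing delay of the backup
    path ending at n. Returns the chosen backup target, or None if the task is rejected."""
    # Prefer candidates under the load threshold, in descending comprehensive index.
    ordered = sorted((n for n in candidates if load_rate[n] <= load_threshold),
                     key=lambda n: comprehensive[n], reverse=True)
    for node in ordered:
        if backup_delay[node] <= d_max:
            return node
    # No lightly loaded candidate meets the delay bound: fall back to the main target
    # (load rate ignored), i.e. re-run Dijkstra towards it with the main path removed;
    # here only the resulting delay is checked.
    if backup_delay.get(main_target, float("inf")) <= d_max:
        return main_target
    return None  # reject the computing task

# Example usage with invented numbers.
target = choose_backup_target(["Esp1", "Esp2"], {"Esp1": 0.9, "Esp2": 0.7},
                              {"Esp1": 0.5, "Esp2": 0.9}, {"Esp1": 4.0, "Esp2": 6.0},
                              main_target="Esp1", d_max=5.0)
```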
The following description is provided in connection with one embodiment:
The invention provides a multipath forwarding method in a computing power network, which comprises the following steps S1 to S5, as shown in FIG. 2:
S1: in a computing power network scenario, the SDN controller acquires the network link state and the resource usage of the computing power nodes in real time; computing tasks arrive at an entry node (source node) of the network at certain time intervals and need to be forwarded to a computing power node (destination node) for processing.
S2: based on the calculation task requirements of the step S1, all calculation tasks are accessed from the nearest entry node, and the SDN controller collects the time delay requirements and the calculation force requirements of the calculation tasks;
S3: for computing tasks that meet the requirements, a main path and a backup path (the backup path does not intersect the main path) are allocated and the required computing power resources are reserved at the destination node (computing power node); computing tasks that do not meet the requirements are denied admission;
S4: after the computing task reaches the entry node, based on the resource allocation method of step S3, the controller decides whether to admit the computing task according to the resource conditions of the network;
S5: the network allocates network resources to the tasks in turn according to their order of arrival; each time the resource allocation of one computing task is completed, the network resource state is updated to the SDN controller and processing of the next computing task begins.
The power network mainly comprises an entry node, a forwarding node, a power node and an SDN controller, and the network resources mainly comprise communication resources (bandwidth resources) and power resources. The computing power node can provide corresponding computing service for the user data by utilizing a computing server deployed on the node; the SDN controller can sense network topology and link state, the computing power resource state of the computing power node and the like, perform functions of flow admission, path planning, resource allocation, configuration issuing and the like according to the state of the network, and sense and update the change of the state of the network at any time.
The computing task in step S1 may be represented as a quadruple S_k = (A_k, B_k, C_k, D_kmax), where A_k denotes the entry node of the network reached by task k, B_k the bandwidth required for transmitting task k, C_k the quantized computing power demand of task k, and D_kmax the maximum service processing delay allowed by task k.

The processing delay D_k of task k consists of two parts, the forwarding delay t_k,f and the computing delay t_k,c. The forwarding delay t_k,f is the sum of the delays of all hops and depends on the forwarding path of the task; the computing delay t_k,c is the delay of the computing power node processing the computing task. For a computing task to be admitted to the network, D_k = t_k,f + t_k,c ≤ D_kmax must hold, otherwise the computing task is rejected. Meanwhile, the remaining bandwidth of the links must meet the transmission requirement of the task and the remaining computing power resources in the network must meet its computing requirement, otherwise the computing task is rejected.
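The admission condition can be written as a small predicate; this is a sketch only, and the parameter names mirror the quadruple S_k = (A_k, B_k, C_k, D_kmax) rather than any reference implementation.

```python
def admit_task(t_forward, t_compute, d_max, link_remaining_bw, bw_demand,
               node_remaining_compute, compute_demand):
    """Admission test for one computing task: the processing delay
    D_k = t_k,f + t_k,c must not exceed D_kmax, every link on the path must have
    enough remaining bandwidth, and the target node enough remaining compute."""
    d_k = t_forward + t_compute
    if d_k > d_max:
        return False
    if any(bw < bw_demand for bw in link_remaining_bw):
        return False
    return node_remaining_compute >= compute_demand

# Example: a task with D_kmax = 10, bandwidth demand 2 and compute demand 20.
ok = admit_task(t_forward=4.5, t_compute=3.0, d_max=10.0,
                link_remaining_bw=[8, 5, 6], bw_demand=2,
                node_remaining_compute=50, compute_demand=20)
```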
In step S2, all calculation tasks enter the computing power network from the entry node, and are transmitted to the calculation server on the computing power node for calculation through route forwarding. The SDN controller senses and calculates network resources, makes decisions on admission and rejection of calculation task traffic, and issues decision results to the entry node for execution. Each computing node is provided with a computing server, the computing resources of the computing server are limited, and the number of the computing nodes is limited. When a plurality of calculation task requests enter a calculation network to calculate, communication resources and calculation resources are sequentially distributed for the calculation tasks according to the time sequence of the task flow reaching the inlet node.
In step S3, according to the network state perceived by the SDN, if the link bandwidth resource and the computing power resource of the network meet the task demand, planning a forwarding path of the task, and calculating a main path and a backup path, including the following steps:
S31: the SDN controller acquires parameters of computing task k such as its bandwidth requirement, computing power requirement and maximum delay;
S32: the computing power network topology may be represented by a directed graph G = (V, E), where V represents the set of nodes and E represents the set of links; the total bandwidth of each link is W_ij and the transmission delay of each link is t_ij; the total number of computing power nodes is N, denoted by the set ESP = {Esp_1, Esp_2, Esp_3, …, Esp_N}. In the network topology, all links that do not meet the bandwidth requirement of computing task k are deleted, and the remaining links are denoted {x_k}; all computing power nodes that do not meet the computing resource requirement are deleted, leaving N_k candidate computing power nodes.

S33: each computing power node has a hop count index α_k and a computing power index β_k relative to task k, from which its comprehensive index η_k = γ·α_k + (1 - γ)·β_k is calculated, where γ is a constant less than 1. The fewer hops between a computing power node and the entry node, the higher its hop count index; the more computing power resources a computing power node has remaining, the higher its computing power index. The computing power node with the highest comprehensive index is selected as the destination node.
S34: after the computing power node has been selected, the transmission delay of each link is used as the measure of link length, and the Dijkstra algorithm is used to calculate the shortest path from the entry node to that computing power node, which yields the main path of the computing task.
S35: in order to ensure that the main path and the backup path do not intersect, the main-path topology is deleted from the network topology when calculating the backup path, and the comprehensive index η_k of each computing power node is recalculated. If the load of the computing power node with the highest comprehensive index does not exceed a preset threshold H (for example 80%; this value can be modified as required), that computing power node is selected as the destination node of the backup path and is called the primary node; the transmission delay of each link is used as the measure of link length, and the backup path is calculated using the Dijkstra algorithm.

S36: if the computing power load of the selected backup-path destination node exceeds the threshold H, the computing power node whose load is below the threshold H and whose comprehensive index is the highest is selected as the destination node of the backup path; this node is called the secondary node, and the backup path is calculated using the Dijkstra algorithm.
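A short sketch of the link-disjointness step in S35 is given below: the main-path links are removed from the topology before the backup path is computed. The adjacency-dictionary format and example values are assumptions; the backup path itself would then be obtained with a delay-weighted Dijkstra search like the one sketched for step S106.

```python
import copy

def remove_primary_path(adj, primary_path):
    """Return a copy of the adjacency dict {u: {v: t_uv}} with every link of the
    primary path removed (both directions), so the backup path cannot reuse it."""
    pruned = copy.deepcopy(adj)
    for u, v in zip(primary_path, primary_path[1:]):
        pruned.get(u, {}).pop(v, None)
        pruned.get(v, {}).pop(u, None)
    return pruned

# Example: after removing A1 -> F1 -> Esp1, a backup path must avoid those links.
adj = {"A1": {"F1": 1.0, "F2": 2.0}, "F1": {"Esp1": 1.5}, "F2": {"Esp2": 1.0}}
pruned = remove_primary_path(adj, ["A1", "F1", "Esp1"])
```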
In step S32, the number of computing power nodes in the network is N, denoted by the set {Esp_1, Esp_2, Esp_3, …, Esp_N}; the i-th computing power node planned for task k can be written as Esp_i, i ∈ 1, 2, …, N, and its remaining computing power resources are denoted K_i.

γ_i denotes the load rate of computing power node Esp_i, i.e. the percentage of used computing power resources relative to the total computing power resources of the node, where the used computing power resources are the sum of the computing power resources occupied by all computing tasks at that node. γ_i is calculated as:

γ_i = (M_i - K_i) / M_i;

where M_i denotes the total computing power resources of node Esp_i.

In step S33, the comprehensive index is calculated as η_k = γ·α_k + (1 - γ)·β_k, where γ is a non-negative constant less than 1.

In the present embodiment, consider two computing power nodes A and B. If at least J_i hops (i = 1, 2, …, N) are required from the entry node to an arbitrary computing power node Esp_i, then the minimum number of hops from the entry node to computing power node A can be written as J_A and the minimum number of hops from the entry node to computing power node B as J_B; if the remaining computing power of an arbitrary computing power node Esp_i is K_i, then the remaining computing power of node A can be written as K_A and that of node B as K_B.

The hop count indices α_A and α_B of nodes A and B are calculated from J_A, J_B and the hop counts J_i of all candidate nodes; the computing power indices β_A and β_B are calculated from K_A, K_B and the remaining computing power K_i of all candidate nodes; and the comprehensive indices η_A and η_B are obtained by the weighted summation of the respective hop count and computing power indices. The comprehensive index η_A of node A is then compared with the comprehensive index η_B of node B, and the node with the larger value is preferentially selected as the destination node.
In step S36, if the delay of the backup path does not meet the task requirement, the destination node of the backup path is set to be the same as that of the main path, and the Dijkstra algorithm is used to recalculate the shortest path from the entry node to that destination node as the backup path. If the delay of the backup path still does not meet the task requirement, the deployment fails; otherwise the task is successfully admitted to the network, and the SDN controller updates the network resource state.
In step S4, the task is accepted if and only if the primary path and the backup path meet the requirements at the same time, otherwise the task is rejected. The method comprises the following specific steps:
S41: if the delay of the main path is greater than the delay requirement of the task, the task is rejected; if the main path delay meets the task delay requirement, the main path is deleted from the network topology graph when calculating the backup path, and a backup path is allocated to the task by calculation.
S42: when the backup path destination node is a primary node, if the backup path delay meets the task delay requirement, the task is accepted, otherwise, the task is refused.
S43: when the destination node of the backup path is a secondary node, if the backup path delay meets the task delay requirement, the task is accepted; if the task time delay requirement is not met, sequentially searching for new secondary nodes according to the descending order of the comprehensive index until the secondary nodes meeting the task time delay requirement are found. And if the time delay of all the secondary nodes does not meet the task time delay requirement, changing the destination node of the backup path from the secondary node to the primary node, and repeating the processes of S33 and S42.
In particular, a computing task is allowed to be admitted to the network only when the main path and the backup path simultaneously satisfy the following constraints:

C1: D_k = t_k,f + t_k,c ≤ D_kmax;

When a backup path is allocated to a task, the backup path occupies the corresponding bandwidth resources. The total bandwidth of each link is W_ij, PB_ij denotes the main-path bandwidth already used on the link, and BB_ij denotes the backup-path bandwidth on the link; the remaining bandwidth of the link is B_ij = W_ij - PB_ij - BB_ij. It is specified that at most two tasks may share a portion of a link's bandwidth (if the bandwidth of a link is being shared by two tasks, its remaining bandwidth is set to 0).
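The bandwidth bookkeeping rule can be sketched with a small per-link class; the class, field names and numbers are illustrative assumptions, not the patent's data structures.

```python
class Link:
    """Per-link bookkeeping for the rule that at most two computing tasks may
    share a link; once two tasks share it, its remaining bandwidth is set to 0."""
    def __init__(self, total_bw):
        self.total_bw = total_bw
        self.primary_bw = 0      # PB_ij: bandwidth used by primary paths
        self.backup_bw = 0       # BB_ij: bandwidth reserved by backup paths
        self.tasks = set()       # tasks currently mapped onto this link

    def remaining(self):
        if len(self.tasks) >= 2:                                  # two tasks already share it
            return 0
        return self.total_bw - self.primary_bw - self.backup_bw   # B_ij

    def reserve(self, task_id, bw, backup=False):
        if self.remaining() < bw:
            return False
        if backup:
            self.backup_bw += bw
        else:
            self.primary_bw += bw
        self.tasks.add(task_id)
        return True

# Example: two tasks reserve bandwidth on a 10-unit link; a third is refused.
link = Link(10)
link.reserve("k1", 3)
link.reserve("k2", 2, backup=True)
assert link.remaining() == 0 and not link.reserve("k3", 1)
```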
In step S5, all computing tasks enter the computing power network from the entry node and are transmitted, via route forwarding, to the computing server on a computing power node for processing. The SDN controller perceives the computing and network resources, makes admission or rejection decisions for computing task traffic, and issues the decision results to the entry node for execution. Each computing power node is equipped with a computing server whose computing resources are limited, and the number of computing power nodes is also limited. When multiple computing task requests enter the computing power network, communication resources and computing resources are allocated to the tasks in the order in which their flows reach the entry node. After the controller finishes processing one computing task, it updates the network state in time and then begins processing the next computing task request.
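The controller's sequential handling described above might be sketched as follows; decide_admission() and reserve_resources() are hypothetical helpers representing the admission decision and the resource reservation/update steps, not APIs defined by the invention.

```python
# Sketch of the controller's sequential task handling: requests are served in
# arrival order and the resource view is refreshed before the next request.
# decide_admission() and reserve_resources() are hypothetical helpers.

def process_requests(requests, network_state, decide_admission, reserve_resources):
    results = []
    # Tasks are ordered by the time their flows reach the ingress node.
    for task in sorted(requests, key=lambda t: t.arrival_time):
        accepted, primary_path, backup_path = decide_admission(task, network_state)
        if accepted:
            # Reserve bandwidth on both paths and computing power on the
            # destination node(s), then update the global resource state.
            reserve_resources(network_state, task, primary_path, backup_path)
        results.append((task, accepted))
    return results
```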
In summary, the multipath forwarding method and system in a computing power network of the present invention acquire the computing power requirement and the delay requirement of a computing task, construct a main path and a backup path, accept the computing task when the link bandwidth resources, the computing power resources, and the processing delays of the main path and the backup path in the computing power network all meet the requirements of the computing task, and reserve computing resources for the computing task in the computing power network. The invention allocates, alongside the main path, a backup path that does not intersect the main path, reserves communication resources on the backup path and computing power resources on the backup target computing power node, and ensures deterministic forwarding and processing of the computing task by transmitting it over the backup path when the main path is congested or faulty.
Further, the comprehensive index of each candidate computing node is calculated according to its distance from the entry node and its remaining computing resources, and the candidate computing node with the highest comprehensive index is selected as the main target computing node or the backup target computing node. When the main and backup target computing nodes are selected for a computing task, the comprehensive index ensures that the service processing time meets the requirements of the computing task and helps maintain load balance across the computing power network.
Furthermore, a computing task is accepted only when the computing resources, the main path delay, and the backup path delay of the computing power network all meet its requirements. Through this multi-layer screening, tasks admitted to the computing power network can be processed without congestion or packet loss, which ensures the stability of the computing power network and improves the user experience.
Furthermore, the invention can improve the processing efficiency of computing tasks in the computing power network while improving the overall resource utilization of the computing power network, thereby improving user satisfaction.
In accordance with the above method, the present invention also provides a system comprising a computer device that includes a processor and a memory, wherein the memory stores computer instructions, the processor is configured to execute the computer instructions stored in the memory, and the system implements the steps of the method described above when the computer instructions are executed by the processor.
The embodiments of the present invention also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the multipath forwarding method described above. The computer readable storage medium may be a tangible storage medium such as a random access memory (RAM), a read-only memory (ROM), an electrically programmable ROM, an electrically erasable programmable ROM, a register, a floppy disk, a hard disk, a removable memory disk, a CD-ROM, or any other form of storage medium known in the art.
Those of ordinary skill in the art will appreciate that the various illustrative components, systems, and methods described in connection with the embodiments disclosed herein can be implemented as hardware, software, or a combination of both. Whether a particular implementation is hardware or software depends on the specific application of the solution and its design constraints. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. When implemented in hardware, the implementation may be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave.
It should be understood that the invention is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and shown, and those skilled in the art can make various changes, modifications and additions, or change the order between steps, after appreciating the spirit of the present invention.
In this disclosure, features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, and various modifications and variations can be made to the embodiments of the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A multi-path forwarding method in a power computing network, wherein the method is performed on a software defined network controller disposed on the power computing network, the power computing network including a plurality of ingress nodes, a plurality of forwarding nodes, and a plurality of power computing nodes, the software defined network controller configured to obtain, in real time, a network link state of the power computing network and a resource usage of the power computing nodes, the method comprising the steps of:
obtaining a computing task received by an entry node, wherein the attributes of the computing task comprise: the entry node reached by the computing task, a transmission bandwidth requirement value of the computing task, a quantized computing power requirement value, and a maximum service processing delay; and acquiring the remaining bandwidth resources of each link and the remaining computing power resources of each computing power node in the computing power network;
Constructing the computational power network into a directed acyclic graph, wherein nodes in the directed acyclic graph represent the entry node, the forwarding node and the plurality of computational power nodes, edges in the directed acyclic graph represent links among the nodes, and the total bandwidth and the transmission delay are marked for each edge as attributes;
comparing the residual bandwidth resources of each link with the transmission bandwidth requirement value, and deleting links which do not meet the transmission bandwidth requirement value; comparing the residual computational power resources of each computational power node with the computational power demand quantized value, deleting the computational power nodes which do not meet the computational power demand quantized value, and taking the residual computational power nodes as candidate computational power nodes;
calculating the hop count index of each candidate computing node according to the minimum hop count passing between the entry node and each candidate computing node, calculating the computing power index of each candidate computing node according to the residual computing power resource of each candidate computing node, and carrying out weighted summation on the hop count index and the computing power index of each candidate computing node to obtain a corresponding comprehensive index;
taking the candidate computing node with the maximum comprehensive index as a main target computing node;
searching for a shortest path between the entry node and the main target computing node by using the Dijkstra algorithm, and taking the shortest path as a main path;
calculating the load rate of the main target computing node; if the load rate of the main target computing node is smaller than or equal to a set threshold, deleting the main path from the directed acyclic graph and searching for the shortest path between the entry node and the main target computing node by using the Dijkstra algorithm as a backup path; if the load rate of the main target computing node is larger than the set threshold, searching the candidate computing nodes for a backup target computing node whose load rate is smaller than the set threshold and whose comprehensive index is the largest, and searching for the shortest path between the entry node and the backup target computing node by using the Dijkstra algorithm as a backup path;
and when the processing time delay of the main path and the backup path is smaller than or equal to the maximum processing time delay allowed by the computing task, the computing network receives and executes the computing task, and the transmission bandwidth requirement value and the computing power requirement quantification value of the computing task are reserved on the main path and the backup path so as to update the residual bandwidth resources of each link and the residual computing power resources of each computing power node in the computing network.
2. The method of multi-path forwarding in a computing power network according to claim 1, wherein after searching for a shortest path between the entry node and the backup target computing power node as a backup path using the Dijkstra algorithm, the method further comprises:
and if the processing delay of the backup path is greater than the maximum service processing delay, deleting the main path from the directed acyclic graph and then searching for the shortest path between the entry node and the main target computing node by using the Dijkstra algorithm as the backup path.
3. The method for multi-path forwarding in a power computing network according to claim 1, wherein the processing delay is a sum of a forwarding delay of each node in a path and a computing delay of a power computing node, and the processing delay is calculated by a formula:
4. The method of multipath forwarding in a computing power network of claim 1, wherein updating the remaining bandwidth resources of each link and the remaining computing power resources of each computing power node in the computing power network comprises:
the remaining bandwidth resources of each link are updated as follows:
B_ij = W_ij - PB_ij - BB_ij;
wherein B_ij denotes the remaining bandwidth resource of the link, W_ij denotes the total bandwidth resource of the link, PB_ij denotes the primary-path bandwidth resource already used on the link, and BB_ij denotes the backup-path bandwidth resource already used on the link;
the remaining computing power resources of each computing power node are updated as follows:
K_i = M_i - N_i - L_i;
wherein K_i denotes the remaining computing power resources of the computing power node, M_i denotes the total computing power resources of the computing power node, N_i denotes the computing power resources used by the primary path at the computing power node, and L_i denotes the computing power resources used by the backup path at the computing power node.
5. The method according to claim 4, wherein when the backup path is allocated to one computing task, the backup path occupies the corresponding link bandwidth resource, and when one link bandwidth is shared by two computing tasks, the remaining bandwidth resource of the link is set to 0.
6. The method for multipath forwarding in a power network according to claim 1, wherein the load factor is calculated by the formula:
7. The method for multipath forwarding in a power network according to claim 4 wherein the hop index is calculated by:
the calculation formula of the calculation power index is as follows:
wherein said hop count index and said computing power index are those of computing node A, J_i denotes the minimum number of hops required from the entry node to an arbitrary computing node, J_A denotes the minimum number of hops required from the entry node to computing node A, K_A denotes the remaining computing power resources of computing node A, and K_i denotes the remaining computing power resources of an arbitrary computing node.
8. The method for multi-path forwarding in a power network according to claim 7, wherein the calculation formula of the composite index is:
9. A multi-path forwarding system in a computing power network comprising a processor and a memory, wherein the memory has stored therein computer instructions for executing the computer instructions stored in the memory, which when executed by the processor, implement the steps of the method of any of claims 1 to 8.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310249916.XA CN116319522A (en) | 2023-03-15 | 2023-03-15 | Multipath forwarding method and system in computing power network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310249916.XA CN116319522A (en) | 2023-03-15 | 2023-03-15 | Multipath forwarding method and system in computing power network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116319522A true CN116319522A (en) | 2023-06-23 |
Family
ID=86812738
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310249916.XA Pending CN116319522A (en) | 2023-03-15 | 2023-03-15 | Multipath forwarding method and system in computing power network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116319522A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118158092A (en) * | 2024-05-11 | 2024-06-07 | 中移(苏州)软件技术有限公司 | Computing power network scheduling method and device and electronic equipment |
CN118158092B (en) * | 2024-05-11 | 2024-08-02 | 中移(苏州)软件技术有限公司 | Computing power network scheduling method and device and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Woldeyohannes et al. | ClusPR: Balancing multiple objectives at scale for NFV resource allocation | |
CN107332913A (en) | A kind of Optimization deployment method of service function chain in 5G mobile networks | |
Tizghadam et al. | Betweenness centrality and resistance distance in communication networks | |
CN107454019B (en) | Dynamic bandwidth allocation method, device, equipment and storage medium for software defined network | |
CN108881207B (en) | Network security service realization method based on security service chain | |
US20070076615A1 (en) | Non-Blocking Destination-Based Routing Networks | |
WO2017117951A1 (en) | Virtual mapping method and device | |
WO2023024219A1 (en) | Joint optimization method and system for delay and spectrum occupancy in cloud-edge collaborative network | |
JP2007104677A (en) | Node delay prediction method and apparatus, and delay guarantee method and apparatus | |
Gvozdiev et al. | On low-latency-capable topologies, and their impact on the design of intra-domain routing | |
CN111245722B (en) | SDN data center network flow forwarding method based on genetic algorithm | |
CN113032096B (en) | SFC mapping method based on node importance and user demand dual perception | |
CN108156041A (en) | A kind of differentiation virtual optical network mapping method perceived based on safety | |
CN108600103A (en) | The ant group algorithm of more QoS route restrictions of oriented multilayer grade network | |
CN113300861B (en) | Network slice configuration method, device and storage medium | |
CN116319522A (en) | Multipath forwarding method and system in computing power network | |
JP2019514309A (en) | System and method for communication network service connectivity | |
CN111800352B (en) | Service function chain deployment method and storage medium based on load balancing | |
US20040233850A1 (en) | Device and a method for determining routing paths in a communication network in the presence of selection attributes | |
CN117749697A (en) | Cloud network fusion pre-scheduling method, device and system and storage medium | |
CN105263166A (en) | Priority-based wireless access control method for dual-path routing | |
JP2004336209A (en) | Traffic distribution control apparatus, and traffic distribution control method | |
JP6389811B2 (en) | Physical resource allocation device, physical resource allocation method, and program | |
CN108174446B (en) | Network node link resource joint distribution method with minimized resource occupancy | |
Chooprateep et al. | Video path selection for traffic engineering in SDN |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||