CN115834466B - Method, device, equipment, system and storage medium for analyzing path of computing power network - Google Patents

Method, device, equipment, system and storage medium for analyzing path of computing power network

Info

Publication number
CN115834466B
Authority
CN
China
Prior art keywords
node
server node
target routing
server
link
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211526944.3A
Other languages
Chinese (zh)
Other versions
CN115834466A (en)
Inventor
张力方
胡泽妍
王玉婷
刘桂志
李一喆
李宏平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China United Network Communications Group Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd
Priority to CN202211526944.3A
Publication of CN115834466A
Application granted
Publication of CN115834466B
Legal status: Active
Anticipated expiration


Abstract

The application provides a method, a device, equipment, a system and a storage medium for analyzing a path of a computing power network. The method includes: determining the resource load scene type corresponding to the link between each server node and the corresponding target routing node at the current moment according to the resource occupancy rate of that link at the current moment and a threshold value; and determining the weight of the link between each server node and each target routing node at the current moment according to the resource occupancy rate of the link at the current moment and the resource occupancy rate, the weight and the threshold value of the link at the previous moment. The method provided by the application can meet the ultra-low latency requirement while ensuring that computing power resources are fully utilized.

Description

Method, device, equipment, system and storage medium for analyzing path of computing power network
Technical Field
The embodiment of the application relates to the technical field of data processing, in particular to a method, a device, equipment, a system and a storage medium for analyzing a path of a computing power network.
Background
With the continuous development of artificial intelligence and mobile internet technology, many new types of service applications are emerging. These applications typically consume huge amounts of computing resources, storage resources and energy, while current intelligent terminal devices have limited computing capability and small battery capacity and cannot meet their processing requirements. Cloud computing was therefore proposed: it uses virtualization technology to build a very large computing resource pool from which applications can obtain the required computing resources, storage resources, and software and platform services. Although cloud computing meets the needs of computation-intensive service processing, some applications are delay-sensitive, and the transmission delay from the terminal to the cloud often cannot meet their ultra-low latency requirement, so edge computing technology can be used.
However, the large-scale deployment of edge computing devices and intelligent terminal devices, while solving the long-delay problem caused by uploading massive data to the cloud computing center, also causes computing power resources to be deployed ubiquitously. On the one hand, edge computing nodes do not cooperate effectively on processing tasks, and the computing power resources of a single node cannot meet the demand of very large computation-intensive tasks such as image rendering, so the ultra-low latency requirement of new services that are both computation-intensive and delay-sensitive cannot be satisfied. On the other hand, although some edge computing nodes are overloaded and cannot process computing tasks effectively, other computing nodes remain idle because of unbalanced network load, so the computing power resources of the edge network cannot be fully utilized.
Therefore, the prior art cannot effectively analyze the paths of a computing power network, and thus can neither meet the ultra-low latency requirement nor fully utilize computing power resources.
Disclosure of Invention
The application provides a method, a device, equipment, a system and a storage medium for analyzing a path of a computing power network, which can meet the ultra-low latency requirement while ensuring that computing power resources are fully utilized.
In a first aspect, the present application provides a method for analyzing a path of a computing power network, which is applied to a computing power network system, including:
acquiring the resource occupancy rate of links between each server node and each target routing node at the current moment, the resource occupancy rate of links between each server node and each target routing node at the previous moment and the weight of links between each server node and each target routing node at the previous moment, wherein the target routing node is a routing node which is directly connected with each server node in the plurality of routing nodes; wherein one server node corresponds to one target routing node;
for the resource occupancy rate of the link between each server node and each target routing node at the current moment, performing the following steps: determining the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment and a threshold value;
Determining the weight of the links between each server node and each target routing node at the current moment according to the resource load scene type, the resource occupancy rate of the links between each server node and each target routing node at the current moment, the resource occupancy rate of the links between each server node and each target routing node at the previous moment, the weight of the links between each server node and each target routing node at the previous moment and the threshold value;
and the weight of the links between each server node and each target routing node at the current moment is used for determining the shortest path of the computing power network.
In one possible design, the threshold value includes a first threshold value and a second threshold value, the first threshold value being less than the second threshold value; the determining the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate and the threshold value of the link between the server node and the corresponding target routing node at the current moment comprises the following steps:
if the resource occupancy rate of the link between the server and the corresponding target route is smaller than a first threshold value, determining that the resource load scene type corresponding to the link between the server node and the corresponding target route node at the current moment is a resource light load;
If the resource occupancy rate of the link between the server node and the corresponding target route is larger than or equal to a first threshold value and smaller than a second threshold value, determining that the resource load scene type corresponding to the link between the server node and the corresponding target route node at the current moment is the resource medium load;
and if the resource occupancy rate of the link between the server node and the corresponding target route is greater than or equal to a second threshold value and less than 1, determining that the resource load scene type corresponding to the link between the server node and the corresponding target route node at the current moment is resource heavy load.
In one possible design, the determining the weight of the link between each server node and each target routing node at the current time according to the resource load scenario type, the resource occupancy rate of the link between each server node and each target routing node at the current time, the resource occupancy rate of the link between each server node and each target routing node at the previous time, the weight of the link between each server node and each target routing node at the previous time, and the threshold value includes:
determining a weight calculation model according to the resource load scene type, the resource occupancy rate of a link between the server node and a corresponding target routing node at the current moment, the resource occupancy rate of a link between the server node and a corresponding target routing node at the previous moment and a threshold value for each server;
And obtaining the weight of the links between each server node and each target routing node at the current moment through the weight calculation model according to the resource occupancy rate of the links between the server node and the corresponding target routing nodes at the current moment, the resource occupancy rate of the links between the server node and the corresponding target routing nodes at the previous moment and the weight of the links between each server node and each target routing node at the previous moment.
In one possible design, the determining a weight calculation model according to the resource load scenario type, the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment, the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous moment, and the threshold value includes:
according to the resource load scene type, the weight calculation model and the multiple of the weight coefficient used for calculating the link between the server node and the corresponding target routing node at the current moment are determined by comparing the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment with the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous moment and comparing the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous moment with a threshold value.
In one possible design, the method further comprises:
and updating the weight of the link between the server node and the corresponding target routing node at the current moment into the network topology structure of the computing power network system.
In one possible design, the method further comprises:
acquiring the weight of a link between two routing nodes on the same link in the plurality of routing nodes;
and determining the shortest path of the computing power network according to the weight of the links between the two routing nodes on the same link and the weight of the links between the server nodes and the corresponding target routers.
In a second aspect, the present application provides a computing power network path analysis apparatus for use in a computing power network system including a plurality of server nodes and a plurality of routing nodes, the apparatus comprising:
the system comprises an acquisition module, a control module and a control module, wherein the acquisition module is used for acquiring the resource occupancy rate of links between each server node and each target routing node at the current moment, the resource occupancy rate of links between each server node and each target routing node at the last moment and the weight of links between each server node and each target routing node at the last moment, and the target routing node is a routing node which is directly connected with each server node in the plurality of routing nodes; wherein one server node corresponds to one target routing node;
The determining module is used for executing the following steps aiming at the resource occupancy rate of links between each server node and each target routing node at the current moment: determining the corresponding resource load scene type of the link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate and the threshold value of the link between the server node and the corresponding target routing node at the current moment;
the path analysis module is used for determining the weight of the links between each server node and each target routing node at the current moment according to the resource load scene type, the resource occupancy rate of the links between each server node and each target routing node at the current moment, the resource occupancy rate of the links between each server node and each target routing node at the previous moment, the weight and the threshold value of the links between each server node and each target routing node at the previous moment;
and the weight of the links between each server node and each target routing node at the current moment is used for determining the shortest path of the computing power network.
In a third aspect, the present application provides an electronic device, comprising: at least one processor and memory;
The memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored by the memory, causing the at least one processor to perform the method of computing a power network path analysis as described above in the first aspect and possible designs of the first aspect.
In a fourth aspect, the present application provides a computing power network system comprising: the electronic device, the plurality of server nodes, and the plurality of routing nodes of the third aspect.
In a fifth aspect, the present application provides a computer-readable storage medium, where computer-executable instructions are stored, and when executed by a processor, implement the method for analyzing a path of a computing power network according to the first aspect and the possible designs of the first aspect.
The method, device, equipment, system and storage medium for analyzing a path of a computing power network provided by the application are applied to a computing power network system that comprises a plurality of server nodes and a plurality of routing nodes. First, the resource occupancy rate of the link between each server node and each target routing node at the current moment, the resource occupancy rate of that link at the previous moment and the weight of that link at the previous moment are acquired, where the target routing node is the routing node, among the plurality of routing nodes, that is directly connected to the server node, and one server node corresponds to one target routing node. Then, for the resource occupancy rate of the link between each server node and each target routing node at the current moment, the following steps are performed: determining the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate of that link at the current moment and a threshold value; and determining the weight of the link between each server node and each target routing node at the current moment according to the resource load scene type, the resource occupancy rate of the link at the current moment, the resource occupancy rate of the link at the previous moment, the weight of the link at the previous moment and the threshold value. The weights of the links between the server nodes and the target routing nodes at the current moment are used to determine the shortest path of the computing power network. In this way, the application acquires the resource occupancy rates corresponding to the previous moment and the current moment, combines them with the weight of the link between the server node and the target routing node at the previous moment and the threshold value, and comprehensively determines the weight at the current moment, so that the weights are updated dynamically based on resource occupancy; the shortest path selected for a service is then determined from the updated weights, resources are used reasonably, and the ultra-low latency requirement is met on the basis of the shortest path.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, a brief description will be given below of the drawings that are needed in the embodiments or the prior art descriptions, it being obvious that the drawings in the following description are some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort to a person skilled in the art.
Fig. 1 is a schematic view of a scenario of a method for analyzing paths of a computing power network according to an embodiment of the present application;
fig. 2 is a flow chart of a method for analyzing a path of a computing power network according to an embodiment of the present application;
FIG. 3 is a flow chart of a method for analyzing paths of a computing power network according to another embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a computing power network path analysis device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented, for example, in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
At present, large numbers of edge computing devices and intelligent terminal devices have been deployed. Although this solves the long-delay problem caused by uploading massive data to the cloud computing center, it also leads to computing power resources being deployed ubiquitously. On the one hand, edge computing nodes do not cooperate effectively on processing tasks, and the computing power resources of a single node cannot meet the demand of very large computation-intensive tasks such as image rendering, so the ultra-low latency requirement of new services that are both computation-intensive and delay-sensitive cannot be satisfied. On the other hand, although some edge computing nodes are overloaded and cannot process computing tasks effectively, other computing nodes remain idle because of unbalanced network load, so the computing power resources of the edge network cannot be fully utilized. The prior art therefore cannot analyze the paths of a computing power network effectively, and thus can neither meet the ultra-low latency requirement nor fully utilize computing power resources.
In order to solve the above problems, the technical idea of the present application is as follows: acquire the resource occupancy rates corresponding to the previous moment and the current moment, combine them with the weight of the link between the server node and the target routing node at the previous moment and the threshold value to determine the weight at the current moment, and thereby update the weights dynamically based on resource occupancy; the shortest path selected for a service is then determined from the updated weights, so that resources are used reasonably and the ultra-low latency requirement is met on the basis of the shortest path.
Term interpretation:
W_RR_ij: the weight of the link between routing node i and routing node j;
W_RN_ij_t: the weight of the link between routing node i and computing server node (i.e., server node) j at time t (which may be taken as the current moment);
W_RN_ij_t-1: the weight of the link between routing node i and computing server node j at time t-1 (which may be taken as the previous moment);
R_RN_ij_t: the resource occupancy rate of the link between routing node i and computing server node j at time t;
R_RN_ij_t-1: the resource occupancy rate of the link between routing node i and computing server node j at time t-1;
Th1: the first threshold value;
Th2: the second threshold value;
α: the weight coefficient.
Referring to fig. 1, fig. 1 is a schematic view of a scenario of a method for analyzing a path of a computing power network according to an embodiment of the present application. Fig. 1 shows a computing power network system comprising a plurality of server nodes (e.g., server node 1 (N1) and server node 2 (N2)) and a plurality of routing nodes (e.g., routing node 1 (R1), routing node 2 (R2), routing node 3 (R3), routing node 4 (R4), routing node 5 (R5) and routing node 6 (R6)). The routing nodes are used for network signal transmission, i.e., service information initiated by the terminal device is transmitted over the links between the routing nodes; the server nodes provide the corresponding business services according to the received service information initiated by the terminal device.
By taking the server resource usage into account, the weights of the links between the servers (here, server nodes such as N1 and N2) and the routers (here, routing nodes such as R3 and R5) are perceived dynamically and configured accordingly (taking time t as an example, these weights are W_RN_31_t and W_RN_52_t), and network computation is performed on this basis, which greatly improves the utilization rate of network resources. The weights of the links between the routing nodes (e.g., W_RR_12, W_RR_13, W_RR_23, W_RR_24, W_RR_25, W_RR_35, W_RR_45, W_RR_46, W_RR_56) are determined by characteristics such as the length of the network signal line, for example an optical fiber. Therefore, by acquiring the resource occupancy rates corresponding to the previous moment and the current moment and combining them with the weight of the link between the server node and the target routing node at the previous moment and the threshold value, the weight at the current moment is determined comprehensively; the weights are thus updated dynamically based on resource occupancy, the shortest path selected for a service is determined from the updated weights, resources are used reasonably, and the ultra-low latency requirement is met on the basis of the shortest path.
The technical solution of the present application is described in detail below with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments.
Referring to fig. 2, fig. 2 is a flow chart of a method for analyzing a path of a computing power network according to an embodiment of the present application.
Referring to fig. 2, the method for analyzing paths of a computing power network is applied to a computing power network system, wherein the computing power network system comprises a plurality of server nodes and a plurality of routing nodes; the method comprises the following steps:
s201, acquiring the resource occupancy rate of links between each server node and each target routing node at the current moment, the resource occupancy rate of links between each server node and each target routing node at the previous moment and the weight of links between each server node and each target routing node at the previous moment, wherein each target routing node is a routing node which is directly connected with each server node in the plurality of routing nodes.
Wherein one server node corresponds to one target routing node. I.e. one server node is directly connected to one target routing node.
In this embodiment, in order to dynamically update the weight of the link between the server and the router, the weight at the previous moment, the resource occupancy at the previous moment and the resource occupancy at the current moment may be obtained, and the weight at the current moment recalculated.
S202, for the resource occupancy rate of the link between each server node and each target routing node at the current moment, the following steps are executed: determining the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment and a threshold value.
In this embodiment, for each server, due to different resource occupancy rates on links between the server node and the corresponding target routing node, the corresponding resource load scenario types of links between the server node and the corresponding target routing node at the current moment are different. Specifically, the resource occupancy rate of the link between the server node at the current moment and the corresponding target routing node is compared with a threshold value, and the type of the resource load scene on the link at the current moment is determined according to the comparison result.
S203, determining the weight of the links between each server node and each target routing node at the current moment according to the resource load scene type, the resource occupancy rate of the links between each server node and each target routing node at the current moment, the resource occupancy rate of the links between each server node and each target routing node at the previous moment, the weight of the links between each server node and each target routing node at the previous moment and the threshold value.
And the weight of the links between each server node and each target routing node at the current moment is used for determining the shortest path of the computing power network.
The weight of the link between each server node and each target routing node at the current moment is the weight corresponding to the resource occupancy rate of that link at the current moment. The more heavily occupied the resource, the greater the weight and the more the path tends to be bypassed, so a shortest path that makes reasonable use of resources can be calculated from the weights.
In this embodiment, the determined resource load scene type first fixes the scenario to which the weight calculation belongs; within that scenario, a weight calculation model is determined based on the resource occupancy rate of the link between each server node and each target routing node at the current moment, the resource occupancy rate of the link at the previous moment, the weight of the link at the previous moment, the threshold value and other conditions, and the weight of the link between each server node and each target routing node at the current moment is obtained. A shortest path is then selected, according to these weights combined with the weights of the links between the routing nodes, as the resource path for the service initiated by the terminal device, and the server node on this path is the resource allocated to the service.
The method for analyzing a path of a computing power network provided by this embodiment is applied to a computing power network system that comprises a plurality of server nodes and a plurality of routing nodes. First, the resource occupancy rate of the link between each server node and each target routing node at the current moment, the resource occupancy rate of that link at the previous moment and the weight of that link at the previous moment are acquired, where the target routing node is the routing node, among the plurality of routing nodes, that is directly connected to the server node, and one server node corresponds to one target routing node. Then, for the resource occupancy rate of the link between each server node and each target routing node at the current moment, the following steps are performed: determining the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate of that link at the current moment and a threshold value; and determining the weight of the link between each server node and each target routing node at the current moment according to the resource load scene type, the resource occupancy rate of the link at the current moment, the resource occupancy rate of the link at the previous moment, the weight of the link at the previous moment and the threshold value. The weights of the links between the server nodes and the target routing nodes at the current moment are used to determine the shortest path of the computing power network. In this way, the resource occupancy rates corresponding to the previous moment and the current moment are acquired and combined with the weight of the link between the server node and the target routing node at the previous moment and the threshold value to comprehensively determine the weight at the current moment, so that the weights are updated dynamically based on resource occupancy; the shortest path selected for a service is then determined from the updated weights, resources are used reasonably, and the ultra-low latency requirement is met on the basis of the shortest path.
In one possible design, this embodiment describes in detail how to determine the resource load scenario type corresponding to the link between the server node and the corresponding destination routing node at the current time based on the foregoing embodiment. The threshold value comprises a first threshold value and a second threshold value, and the first threshold value is smaller than the second threshold value; the determining of the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate and the threshold value of the link between the server node and the corresponding target routing node at the current moment can be realized by the following steps:
step a1, if the resource occupancy rate of the link between the server and the corresponding target route is smaller than a first threshold value, determining that the resource load scene type corresponding to the link between the server node and the corresponding target route node at the current moment is a resource light load;
step a2, if the resource occupancy rate of the link between the server node and the corresponding target route is greater than or equal to a first threshold value and less than a second threshold value, determining that the resource load scene type corresponding to the link between the server node and the corresponding target route node at the current moment is resource medium load;
And a3, if the resource occupancy rate of the link between the server node and the corresponding target route is greater than or equal to a second threshold value and less than 1, determining that the resource load scene type corresponding to the link between the server node and the corresponding target route node at the current moment is resource heavy load.
In this embodiment, when 0 ≤ R_RN_ij_t < Th1, the resource is lightly loaded and the server Nj should be selected preferentially; that is, the resource load scene type corresponding to the link between the server node and the corresponding target routing node at time t (which may be taken as the current moment) is resource light load.
When Th1 ≤ R_RN_ij_t < Th2, the resource is moderately loaded and there is no preference regarding the selection of the server Nj; that is, the resource load scene type corresponding to the link between the server node and the corresponding target routing node at time t (which may be taken as the current moment) is resource medium load.
When Th2 ≤ R_RN_ij_t < 1, the resource is heavily loaded and the selection of the server Nj should be avoided; that is, the resource load scene type corresponding to the link between the server node and the corresponding target routing node at time t (which may be taken as the current moment) is resource heavy load.
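A minimal Python sketch of this three-way classification, reusing the assumed TH1/TH2 values from the notation sketch above; the string labels "light", "medium" and "heavy" are illustrative stand-ins for resource light load, resource medium load and resource heavy load.

```python
def classify_load(r_t: float, th1: float = TH1, th2: float = TH2) -> str:
    """Map the current occupancy rate R_RN_ij_t to a resource load scene type."""
    if 0 <= r_t < th1:
        return "light"   # resource light load: prefer selecting server Nj
    if th1 <= r_t < th2:
        return "medium"  # resource medium load: no preference for server Nj
    if th2 <= r_t < 1:
        return "heavy"   # resource heavy load: avoid selecting server Nj
    raise ValueError("occupancy rate expected in [0, 1)")
```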
In one possible design, this embodiment describes S203 in detail on the basis of the above embodiment. According to the resource load scene type, the resource occupancy rate of the links between each server node and each target routing node at the current moment, the resource occupancy rate of the links between each server node and each target routing node at the last moment, the weight and the threshold value of the links between each server node and each target routing node at the last moment, the weight of the links between each server node and each target routing node at the current moment can be determined, and the method can be realized by the following steps:
And b1, determining a weight calculation model according to the resource load scene type, the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment, the resource occupancy rate of the link between the server node and the corresponding target routing node at the last moment and a threshold value for each server.
In one possible design, the weight calculation model is determined according to the resource load scene type, the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment, the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous moment, and the threshold value, and the weight calculation model can be implemented by the following steps:
according to the resource load scene type, the weight calculation model and the multiple of the weight coefficient used for calculating the link between the server node and the corresponding target routing node at the current moment are determined by comparing the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment with the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous moment and comparing the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous moment with a threshold value.
In this embodiment, if the resource load scene type is resource light load, i.e., 0 ≤ R_RN_ij_t < Th1, the following cases are analyzed and the one matching the current moment is selected to determine the weight calculation model.
Case 11: if R_RN_ij_t - R_RN_ij_t-1 ≤ 0:
1) When Th1 ≤ R_RN_ij_t-1 < Th2, k is set to an odd number, and the weight calculation model is:
W_RN_ij_t = W_RN_ij_t-1 * [1 + 2α*(-1)^k*(|R_RN_ij_t - R_RN_ij_t-1|/R_RN_ij_t)];
The multiple of the weight coefficient is 2 because the resource occupancy rate at the current moment is not greater than that at the previous moment while the occupancy rate at the previous moment lies between the two threshold values, which indicates that the weight of the link between the server node and the corresponding target routing node at the current moment should be lower than at the previous moment; k is therefore set to an odd number and the multiple of α is greater than 1, for example 2.
2) When Th2 ≤ R_RN_ij_t-1 < 1, k is set to an odd number, and the weight calculation model is:
W_RN_ij_t = W_RN_ij_t-1 * [1 + 3α*(-1)^k*(|R_RN_ij_t - R_RN_ij_t-1|/R_RN_ij_t)];
Here the multiple of the weight coefficient is 3 because the resource occupancy rate at the current moment is not greater than that at the previous moment while the occupancy rate at the previous moment exceeds the second (larger) threshold value, which indicates that the weight at the current moment should be lower than at the previous moment; k is set to an odd number and the multiple of α is greater than that in 1) of case 11, for example 3.
Case 12: if R_RN_ij_t - R_RN_ij_t-1 > 0, k is set to an even number, and the weight calculation model is:
W_RN_ij_t = W_RN_ij_t-1 * [1 + α*(-1)^k*(|R_RN_ij_t - R_RN_ij_t-1|/R_RN_ij_t)];
Here the multiple of the weight coefficient is 1 because the resource occupancy rate at the current moment is greater than that at the previous moment while still not greater than the first threshold value, which indicates that the weight at the current moment should be higher than at the previous moment; k is set to an even number and the multiple of α may be 1.
If the resource load scene type is resource medium load, i.e., Th1 ≤ R_RN_ij_t < Th2, the following cases are analyzed and the one matching the current moment is selected to determine the weight calculation model.
Case 21: if R_RN_ij_t - R_RN_ij_t-1 ≤ 0:
1) When Th1 ≤ R_RN_ij_t-1 < Th2, k is set to an odd number, and the weight calculation model is:
W_RN_ij_t = W_RN_ij_t-1 * [1 + α*(-1)^k*(|R_RN_ij_t - R_RN_ij_t-1|/R_RN_ij_t)];
Here the multiple of the weight coefficient is 1 because the resource occupancy rate at the current moment is not greater than that at the previous moment while the occupancy rate at the previous moment lies between the two threshold values, which indicates that the weight at the current moment should be lower than at the previous moment; k is set to an odd number and the multiple of α may be 1.
2) When Th2 ≤ R_RN_ij_t-1 < 1, k is set to an odd number, and the weight calculation model is:
W_RN_ij_t = W_RN_ij_t-1 * [1 + 2α*(-1)^k*(|R_RN_ij_t - R_RN_ij_t-1|/R_RN_ij_t)];
Here the multiple of the weight coefficient is 2 because the resource occupancy rate at the current moment is not greater than that at the previous moment while the occupancy rate at the previous moment exceeds the second (larger) threshold value, which indicates that the weight at the current moment should be lower than at the previous moment; k is set to an odd number and the multiple of α is greater than that in 1) of case 21, for example 2.
Case 22: if R_RN_ij_t - R_RN_ij_t-1 > 0:
1) When 0 ≤ R_RN_ij_t-1 < Th1, k is set to an even number, and the weight calculation model is:
W_RN_ij_t = W_RN_ij_t-1 * [1 + 2α*(-1)^k*(|R_RN_ij_t - R_RN_ij_t-1|/R_RN_ij_t)];
Here the multiple of the weight coefficient is 2 because the resource occupancy rate at the current moment is greater than that at the previous moment while the occupancy rate at the previous moment is below the first threshold value, which indicates that the weight at the current moment should be higher than at the previous moment; k is set to an even number and the multiple of α may be 2.
2) When Th1 ≤ R_RN_ij_t-1 < Th2, W_RN_ij_t = W_RN_ij_t-1.
If the resource load scene type is resource heavy load, i.e., Th2 ≤ R_RN_ij_t < 1, the following two cases are analyzed and the one matching the current moment is selected to determine the weight calculation model.
Case 31: if R_RN_ij_t - R_RN_ij_t-1 ≤ 0, then W_RN_ij_t = W_RN_ij_t-1.
Case 32: if R_RN_ij_t - R_RN_ij_t-1 > 0:
1) When 0 ≤ R_RN_ij_t-1 < Th1, k is set to an even number, and the weight calculation model is:
W_RN_ij_t = W_RN_ij_t-1 * [1 + 2α*(-1)^k*(|R_RN_ij_t - R_RN_ij_t-1|/R_RN_ij_t)];
Here the multiple of the weight coefficient is 2 because the resource occupancy rate at the current moment is greater than that at the previous moment while the occupancy rate at the previous moment is below the first threshold value, which indicates that the weight at the current moment should be higher than at the previous moment; k is set to an even number and the multiple of α may be 2.
2) When Th1 ≤ R_RN_ij_t-1 < Th2, k is set to an even number, and the weight calculation model is:
W_RN_ij_t = W_RN_ij_t-1 * [1 + 3α*(-1)^k*(|R_RN_ij_t - R_RN_ij_t-1|/R_RN_ij_t)];
Here the multiple of the weight coefficient is 3 because the resource occupancy rate at the current moment is greater than that at the previous moment while the occupancy rate at the previous moment lies between the two threshold values, which indicates that the weight at the current moment should be higher than at the previous moment; k is set to an even number and the multiple of α is greater than that in 1) of case 32, for example 3.
And b2, obtaining the weight of the links between each server node and each target routing node at the current moment through the weight calculation model according to the resource occupancy rate of the links between the server node and the corresponding target routing nodes at the current moment, the resource occupancy rate of the links between the server node and the corresponding target routing nodes at the previous moment and the weight of the links between each server node and each target routing node at the previous moment.
In this embodiment, the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment, the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous moment, and the weight of the link between each server node and each target routing node at the previous moment are input into the weight calculation model, so as to obtain the weight of the link between each server node and each target routing node at the current moment.
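The case analysis above can be strung together into a single update function. The Python sketch below is one illustrative reading of the formulas, reusing the assumed LinkState, classify_load, TH1, TH2 and ALPHA from the earlier sketches; leaving the weight unchanged in the sub-cases the text does not enumerate, and guarding against division by zero when R_RN_ij_t is 0, are assumptions rather than parts of the described method.

```python
def update_weight(state: LinkState, alpha: float = ALPHA,
                  th1: float = TH1, th2: float = TH2) -> float:
    """Compute W_RN_ij_t from W_RN_ij_t-1 following the case analysis above."""
    r_t, r_prev, w_prev = state.r_t, state.r_prev, state.w_prev
    if r_t == 0:
        return w_prev  # ratio |ΔR|/R_RN_ij_t undefined at zero occupancy (assumption)
    scene = classify_load(r_t, th1, th2)
    delta = r_t - r_prev

    def adjusted(multiple: int, k_even: bool) -> float:
        sign = 1.0 if k_even else -1.0  # (-1)^k: even k raises the weight, odd k lowers it
        return w_prev * (1 + multiple * alpha * sign * abs(delta) / r_t)

    if scene == "light":
        if delta <= 0:
            if th1 <= r_prev < th2:
                return adjusted(2, k_even=False)  # case 11, 1)
            if th2 <= r_prev < 1:
                return adjusted(3, k_even=False)  # case 11, 2)
            return w_prev  # r_prev < th1 is not enumerated in the text; left unchanged
        return adjusted(1, k_even=True)           # case 12
    if scene == "medium":
        if delta <= 0:
            if th1 <= r_prev < th2:
                return adjusted(1, k_even=False)  # case 21, 1)
            if th2 <= r_prev < 1:
                return adjusted(2, k_even=False)  # case 21, 2)
            return w_prev  # not enumerated; left unchanged
        if r_prev < th1:
            return adjusted(2, k_even=True)       # case 22, 1)
        return w_prev                             # case 22, 2): weight kept unchanged
    # scene == "heavy"
    if delta <= 0:
        return w_prev                             # case 31: weight kept unchanged
    if r_prev < th1:
        return adjusted(2, k_even=True)           # case 32, 1)
    if th1 <= r_prev < th2:
        return adjusted(3, k_even=True)           # case 32, 2)
    return w_prev  # not enumerated; left unchanged
```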
In one possible design, this embodiment may be further implemented by the following steps based on the above embodiment:
and updating the weight of the link between the server node and the corresponding target routing node at the current moment into the network topology structure of the computing power network system.
In this embodiment, after the network topology weights are set, they are updated dynamically; resource allocation is then performed and the shortest path is selected based on the updated weights. The more heavily occupied a resource is, the greater the weight and the more the path tends to be bypassed, so a shortest path that makes reasonable use of resources can be calculated from the weights.
In one possible design, the method may also be implemented by:
acquiring the weight of a link between two routing nodes on the same link in the plurality of routing nodes;
and determining the shortest path of the computing power network according to the weight of the links between the two routing nodes on the same link and the weight of the links between the server nodes and the corresponding target routers.
In this embodiment, the weights of the links between the routing nodes on the same link among the plurality of routing nodes are combined with the weights of the links between each server node and the corresponding target routing node to determine the server resource to be allocated, and the shortest path is then determined over all the links from the terminal device to the selected server node. The shortest path may be calculated, for example, with Dijkstra's algorithm, which is not specifically limited herein.
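To illustrate this final step under the assumptions above, the sketch below merges the static router-router weights with server-router weights produced by the assumed update_weight function into one undirected graph and runs Dijkstra's algorithm, which the text names as one possible (not mandated) choice. The topology is a partial, made-up rendering of Fig. 1 and the numeric weights are arbitrary.

```python
import heapq

def shortest_path(edges: dict[tuple[str, str], float],
                  src: str, dst: str) -> tuple[float, list[str]]:
    """Dijkstra over an undirected weighted graph given as {(u, v): weight}."""
    graph: dict[str, list[tuple[str, float]]] = {}
    for (u, v), w in edges.items():
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))

    dist = {src: 0.0}
    prev: dict[str, str] = {}
    heap, visited = [(0.0, src)], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))

    path, node = [dst], dst  # walk predecessors back from the destination
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# Partial Fig. 1 topology: router-router weights (W_RR_*) are static, while the
# router-server weights (W_RN_*) come from update_weight() above; values are made up.
edges = {
    ("R1", "R2"): 1.0, ("R1", "R3"): 1.0, ("R2", "R3"): 1.0, ("R2", "R5"): 2.0,
    ("R3", "R5"): 1.5,
    ("R3", "N1"): update_weight(LinkState(r_t=0.2, r_prev=0.4, w_prev=1.0)),
    ("R5", "N2"): update_weight(LinkState(r_t=0.8, r_prev=0.6, w_prev=1.0)),
}
print(shortest_path(edges, "R1", "N1"))  # e.g. (1.8, ['R1', 'R3', 'N1'])
```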
Specifically, referring to fig. 3, fig. 3 is a flow chart of a method for analyzing a path of a computing power network according to still another embodiment of the present application. The resource occupancy rates corresponding to the previous moment and the current moment are acquired, the weight of the link between the server node and the target routing node at the current moment is determined by combining them with the weight of that link at the previous moment and the threshold value, and the shortest path of the computing power network is then determined based on the weights of the links between routing nodes on the same link and the weights of the links between the server nodes and the corresponding target routing nodes. In this way, the weights are updated dynamically based on resource occupancy, the shortest path selected for a service is determined from the updated weights, resources are used reasonably, and the ultra-low latency requirement is met on the basis of the shortest path.
In order to implement the method for analyzing the path of the power network, the embodiment provides a device for analyzing the path of the power network. Referring to fig. 4, fig. 4 is a schematic structural diagram of a computing power network path analysis device according to an embodiment of the present application; the computing power network path analysis device 40 is applied to a computing power network system including a plurality of server nodes and a plurality of routing nodes, and includes:
The obtaining module 401 is configured to obtain a resource occupancy rate of a link between each server node and each target routing node at a current time, a resource occupancy rate of a link between each server node and each target routing node at a previous time, and a weight of a link between each server node and each target routing node at a previous time, where the target routing node is a routing node directly connected to each server node in the plurality of routing nodes; wherein one server node corresponds to one target routing node;
the determining module 402 is configured to perform, for each of the resource occupancy rates of links between each of the server nodes and each of the target routing nodes at the current time, the following steps: determining the corresponding resource load scene type of the link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate and the threshold value of the link between the server node and the corresponding target routing node at the current moment;
the path analysis module 403 is configured to determine the weight of the link between each server node and each target routing node at the current moment according to the resource load scenario type, the resource occupancy rate of the link between each server node and each target routing node at the current moment, the resource occupancy rate of the link between each server node and each target routing node at the previous moment, the weight of the link between each server node and each target routing node at the previous moment, and the threshold value;
And the weight of the links between each server node and each target routing node at the current moment is used for determining the shortest path of the computing power network.
Through the obtaining module 401, the determining module 402 and the path analysis module 403, the apparatus first acquires the resource occupancy rate of the link between each server node and each target routing node at the current moment, the resource occupancy rate of that link at the previous moment and the weight of that link at the previous moment, where the target routing node is the routing node, among the plurality of routing nodes, that is directly connected to the server node, and one server node corresponds to one target routing node. Then, for the resource occupancy rate of the link between each server node and each target routing node at the current moment, the following steps are performed: determining the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate of that link at the current moment and a threshold value; and determining the weight of the link between each server node and each target routing node at the current moment according to the resource load scene type, the resource occupancy rate of the link at the current moment, the resource occupancy rate of the link at the previous moment, the weight of the link at the previous moment and the threshold value. The weights of the links between the server nodes and the target routing nodes at the current moment are used to determine the shortest path of the computing power network. In this way, the resource occupancy rates corresponding to the previous moment and the current moment are acquired and combined with the weight of the link between the server node and the target routing node at the previous moment and the threshold value to comprehensively determine the weight at the current moment, so that the weights are updated dynamically based on resource occupancy; the shortest path selected for a service is then determined from the updated weights, resources are used reasonably, and the ultra-low latency requirement is met on the basis of the shortest path.
The device provided in this embodiment may be used to implement the technical solution of the foregoing method embodiment, and its implementation principle and technical effects are similar, and this embodiment will not be described herein again.
In one possible design, the threshold value includes a first threshold value and a second threshold value, the first threshold value being less than the second threshold value; the determining module is specifically configured to:
when the resource occupancy rate of the link between the server node and the corresponding target routing node is smaller than the first threshold value, determine that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource light load;
when the resource occupancy rate of the link between the server node and the corresponding target routing node is greater than or equal to the first threshold value and smaller than the second threshold value, determine that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource medium load;
and when the resource occupancy rate of the link between the server node and the corresponding target routing node is greater than or equal to the second threshold value and smaller than 1, determine that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource heavy load.
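A minimal sketch of this two-threshold classification follows, assuming the occupancy rate is expressed as a fraction in [0, 1); the threshold values 0.3 and 0.7 used in the demo calls are illustrative only and are not specified in this passage.

```python
def classify_load(occupancy: float, first_threshold: float, second_threshold: float) -> str:
    """Map a link's current resource occupancy rate to a resource load scene type."""
    if not 0.0 <= occupancy < 1.0:
        raise ValueError("occupancy rate is expected to lie in [0, 1)")
    if occupancy < first_threshold:
        return "light"    # resource light load
    if occupancy < second_threshold:
        return "medium"   # resource medium load
    return "heavy"        # resource heavy load: >= second threshold and < 1

# Demo with illustrative thresholds (0.3 and 0.7 are not taken from the patent).
print(classify_load(0.15, 0.3, 0.7))  # light
print(classify_load(0.45, 0.3, 0.7))  # medium
print(classify_load(0.85, 0.3, 0.7))  # heavy
```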
In one possible design, the path analysis module includes: a weight calculation model determination unit and a weight determination unit;
the weight calculation model determining unit is used for determining, for each server node, a weight calculation model according to the resource load scene type, the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment, the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous moment, and the threshold value;
the weight determining unit is used for obtaining, through the weight calculation model, the weight of the links between each server node and each target routing node at the current moment according to the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment, the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous moment, and the weight of the links between each server node and each target routing node at the previous moment.
In one possible design, the weight calculation model determining unit is specifically configured to:
according to the resource load scene type, determine the weight calculation model and the multiple of the weight coefficient used for calculating the weight of the link between the server node and the corresponding target routing node at the current moment, by comparing the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment with the resource occupancy rate of that link at the previous moment, and by comparing the resource occupancy rate of that link at the previous moment with the threshold value.
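The passage above fixes only the comparison structure (current occupancy against previous occupancy, and previous occupancy against the threshold value), not the numeric multiples themselves, so the multipliers 0.5, 1, 2 and 4 in the sketch below, as well as the function name weight_multiplier, are invented placeholders.

```python
def weight_multiplier(scene: str, occ_now: float, occ_prev: float, threshold: float) -> float:
    """Choose the multiple of the weight coefficient for a server-node /
    target-routing-node link. Only the comparison structure follows the text
    above; the numeric multiples are placeholders."""
    rising = occ_now > occ_prev              # occupancy grew since the previous moment
    prev_above_threshold = occ_prev >= threshold
    if scene == "light":
        return 0.5 if not rising else 1.0    # relax the weight of an idle, stable link
    if scene == "medium":
        return 2.0 if rising and prev_above_threshold else 1.0
    return 4.0 if rising else 2.0            # heavy load: penalise the link strongly

def compute_weight(scene: str, occ_now: float, occ_prev: float,
                   w_prev: float, threshold: float) -> float:
    """Weight at the current moment = weight at the previous moment x chosen multiple."""
    return w_prev * weight_multiplier(scene, occ_now, occ_prev, threshold)

print(compute_weight("medium", 0.6, 0.4, 10.0, 0.3))  # 20.0
```

Under these placeholder rules a link whose occupancy keeps rising is penalised more strongly, which is one plausible way to steer the later shortest-path search away from congested server access links.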
In one possible design, the apparatus further comprises: a weight updating module; the weight updating module is used for:
updating the weight of the link between the server node and the corresponding target routing node at the current moment into the network topology structure of the computing power network system.
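A minimal sketch of this write-back step, assuming the network topology of the computing power network system is held as a networkx graph whose edges carry a "weight" attribute; networkx and the function name write_back_weights are illustrative assumptions, not part of the patent.

```python
import networkx as nx

def write_back_weights(topology: nx.Graph, current_weights: dict) -> None:
    """Store the newly computed weight on each server-node / target-routing-node edge."""
    for (server_node, target_routing_node), weight in current_weights.items():
        topology[server_node][target_routing_node]["weight"] = weight

topology = nx.Graph()
topology.add_edge("server1", "router1", weight=10.0)          # weight at the previous moment
write_back_weights(topology, {("server1", "router1"): 20.0})  # weight at the current moment
print(topology["server1"]["router1"]["weight"])               # 20.0
```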
In one possible design, the path analysis module is further configured to:
acquiring the weight of a link between two routing nodes on the same link in the plurality of routing nodes;
and determining the shortest path of the computing power network according to the weight of the links between the two routing nodes on the same link and the weight of the links between the server nodes and the corresponding target routing nodes.
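The sketch below illustrates this final step on a small invented topology: router-to-router edges keep their own weights, server access edges carry the dynamically updated weights, and Dijkstra's algorithm (here via networkx) returns the minimum-weight path; all node names and weight values are made up for the example.

```python
import networkx as nx

g = nx.Graph()
# Router-to-router links with their own weights
g.add_edge("routerA", "routerB", weight=1.0)
g.add_edge("routerB", "routerC", weight=1.0)
g.add_edge("routerA", "routerC", weight=5.0)
# Server access links carrying the dynamically updated weights
g.add_edge("server1", "routerA", weight=2.0)
g.add_edge("server2", "routerC", weight=20.0)  # heavily loaded link, already penalised
g.add_edge("server3", "routerC", weight=3.0)

# Minimum-total-weight path from the source server to each candidate computing node
source = "server1"
for candidate in ("server2", "server3"):
    path = nx.dijkstra_path(g, source, candidate, weight="weight")
    cost = nx.dijkstra_path_length(g, source, candidate, weight="weight")
    print(candidate, path, cost)
# server3 wins (cost 7.0 via routerA-routerB-routerC) because server2's overloaded
# access link inflates its total path cost to 24.0.
```

Because the overloaded access link of server2 has been given a large weight, the search naturally prefers server3 even though both candidate servers are topologically close to the source.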
In order to implement the method for analyzing the path of the computing power network, this embodiment provides an electronic device. Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in Fig. 5, the electronic device 50 of this embodiment includes: at least one processor 501 and a memory 502, wherein the memory 502 is used for storing computer-executable instructions, and the at least one processor 501 is used for executing the computer-executable instructions stored in the memory to perform the steps described in the foregoing embodiments. For details, reference may be made to the relevant description of the foregoing method embodiments.
The embodiment of the application also provides a computing power network system, which comprises: an electronic device, a plurality of server nodes and a plurality of routing nodes as described above.
The embodiment of the application also provides a computer readable storage medium, wherein computer-executable instructions are stored in the computer readable storage medium, and when a processor executes the computer-executable instructions, the method for analyzing the path of the computing power network described above is implemented.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms. In addition, each functional module in each embodiment of the present application may be integrated in one processing unit, or each module may exist alone physically, or two or more modules may be integrated in one unit. The units formed by the modules can be realized in a form of hardware or a form of hardware and software functional units.
The integrated modules, when implemented in the form of software functional modules, may be stored in a computer readable storage medium. The software functional module is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present application. It should be understood that the above processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present application may be embodied directly as being executed by a hardware processor, or executed by a combination of hardware and software modules in a processor.
The memory may comprise a high-speed RAM memory, and may further comprise a non-volatile memory (NVM), such as at least one magnetic disk memory; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, an optical disk, or the like. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the buses in the drawings of the present application are not limited to only one bus or one type of bus. The storage medium may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as a static random-access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Alternatively, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). The processor and the storage medium may also reside as discrete components in an electronic device or a master control device.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the method embodiments described above may be performed by hardware related to program instructions. The foregoing program may be stored in a computer readable storage medium; when executed, the program performs the steps of the method embodiments described above. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them; although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or replace some or all of the technical features with equivalents; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A computing power network path analysis method, applied to a computing power network system, wherein the computing power network system comprises a plurality of server nodes and a plurality of routing nodes; the method comprises the following steps:
acquiring the resource occupancy rate of links between each server node and each target routing node at the current moment, the resource occupancy rate of links between each server node and each target routing node at the previous moment, and the weight of links between each server node and each target routing node at the previous moment, wherein the target routing node is a routing node which is directly connected with each server node in the plurality of routing nodes, and one server node corresponds to one target routing node;
for the resource occupancy rate of the link between each server node and each target routing node at the current moment, executing the following steps: determining the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment and a threshold value;
determining the weight of the links between each server node and each target routing node at the current moment according to the resource load scene type, the resource occupancy rate of the links between each server node and each target routing node at the current moment, the resource occupancy rate of the links between each server node and each target routing node at the previous moment, the weight of the links between each server node and each target routing node at the previous moment and the threshold value;
and the weight of the links between each server node and each target routing node at the current moment is used for determining the shortest path of the computing power network.
2. The method of claim 1, wherein the threshold value comprises a first threshold value and a second threshold value, the first threshold value being less than the second threshold value; the determining the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate and the threshold value of the link between the server node and the corresponding target routing node at the current moment comprises the following steps:
if the resource occupancy rate of the link between the server node and the corresponding target routing node is smaller than the first threshold value, determining that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource light load;
if the resource occupancy rate of the link between the server node and the corresponding target routing node is greater than or equal to the first threshold value and smaller than the second threshold value, determining that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource medium load;
and if the resource occupancy rate of the link between the server node and the corresponding target routing node is greater than or equal to the second threshold value and smaller than 1, determining that the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment is resource heavy load.
3. The method according to claim 2, wherein determining the weight of the links between each server node and each target routing node at the current time according to the resource load scenario type, the resource occupancy of the links between each server node and each target routing node at the current time, the resource occupancy of the links between each server node and each target routing node at the previous time, the weight of the links between each server node and each target routing node at the previous time, and the threshold value comprises:
for each server node, determining a weight calculation model according to the resource load scene type, the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment, the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous moment, and the threshold value;
and obtaining, through the weight calculation model, the weight of the links between each server node and each target routing node at the current moment according to the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment, the resource occupancy rate of the link between the server node and the corresponding target routing node at the previous moment, and the weight of the links between each server node and each target routing node at the previous moment.
4. A method according to claim 3, wherein said determining a weight calculation model according to the resource load scenario type, the resource occupancy of the link between the server node and the corresponding target routing node at the current time, the resource occupancy of the link between the server node and the corresponding target routing node at the previous time, and the threshold value comprises:
according to the resource load scene type, determining the weight calculation model and the multiple of the weight coefficient used for calculating the weight of the link between the server node and the corresponding target routing node at the current moment, by comparing the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment with the resource occupancy rate of that link at the previous moment, and by comparing the resource occupancy rate of that link at the previous moment with the threshold value.
5. The method according to any one of claims 1-4, further comprising:
and updating the weight of the link between the server node and the corresponding target routing node at the current moment into the network topology structure of the computing power network system.
6. The method according to any one of claims 1-4, further comprising:
acquiring the weight of a link between two routing nodes on the same link in the plurality of routing nodes;
and determining the shortest path of the computing power network according to the weight of the links between the two routing nodes on the same link and the weight of the links between the server nodes and the corresponding target routing nodes.
7. A computing power network path analysis apparatus for use in a computing power network system comprising a plurality of server nodes and a plurality of routing nodes, the apparatus comprising:
an obtaining module, a determining module and a path analysis module, wherein the obtaining module is used for acquiring the resource occupancy rate of links between each server node and each target routing node at the current moment, the resource occupancy rate of links between each server node and each target routing node at the previous moment, and the weight of links between each server node and each target routing node at the previous moment, and the target routing node is a routing node which is directly connected with each server node in the plurality of routing nodes; wherein one server node corresponds to one target routing node;
the determining module is used for performing the following steps for the resource occupancy rate of links between each server node and each target routing node at the current moment: determining the resource load scene type corresponding to the link between the server node and the corresponding target routing node at the current moment according to the resource occupancy rate of the link between the server node and the corresponding target routing node at the current moment and a threshold value;
the path analysis module is used for determining the weight of the links between each server node and each target routing node at the current moment according to the resource load scene type, the resource occupancy rate of the links between each server node and each target routing node at the current moment, the resource occupancy rate of the links between each server node and each target routing node at the previous moment, the weight of the links between each server node and each target routing node at the previous moment, and the threshold value;
and the weight of the links between each server node and each target routing node at the current moment is used for determining the shortest path of the computing power network.
8. An electronic device, comprising: at least one processor and memory;
The memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the computing power network path analysis method of any one of claims 1-6.
9. A computing power network system, comprising: the electronic device according to claim 8, a plurality of server nodes, and a plurality of routing nodes.
10. A computer readable storage medium, wherein computer-executable instructions are stored in the computer readable storage medium, and when a processor executes the computer-executable instructions, the computing power network path analysis method of any one of claims 1-6 is implemented.
CN202211526944.3A 2022-12-01 2022-12-01 Method, device, equipment, system and storage medium for analyzing path of computing power network Active CN115834466B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211526944.3A CN115834466B (en) 2022-12-01 2022-12-01 Method, device, equipment, system and storage medium for analyzing path of computing power network

Publications (2)

Publication Number Publication Date
CN115834466A CN115834466A (en) 2023-03-21
CN115834466B true CN115834466B (en) 2024-04-16

Family

ID=85533396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211526944.3A Active CN115834466B (en) 2022-12-01 2022-12-01 Method, device, equipment, system and storage medium for analyzing path of computing power network

Country Status (1)

Country Link
CN (1) CN115834466B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11395308B2 (en) * 2019-04-30 2022-07-19 Fujitsu Limited Monitoring-based edge computing service with delay assurance

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8040808B1 (en) * 2008-10-20 2011-10-18 Juniper Networks, Inc. Service aware path selection with a network acceleration device
WO2013042349A1 (en) * 2011-09-22 2013-03-28 日本電気株式会社 Device and method for determining allocation resources and resource provision system
WO2014185768A1 (en) * 2013-05-13 2014-11-20 Mimos Berhad A method of spectrum aware routing in a mesh network and a system derived thereof
WO2022116957A1 (en) * 2020-12-02 2022-06-09 中兴通讯股份有限公司 Algorithm model determining method, path determining method, electronic device, sdn controller, and medium
CN113766544A (en) * 2021-09-18 2021-12-07 国网河南省电力公司信息通信公司 Multi-edge cooperation-based power Internet of things slice optimization method
CN114040479A (en) * 2021-10-29 2022-02-11 中国联合网络通信集团有限公司 Calculation force node selection method and device and computer readable storage medium
CN114745317A (en) * 2022-02-09 2022-07-12 北京邮电大学 Computing task scheduling method facing computing power network and related equipment
CN114867065A (en) * 2022-05-18 2022-08-05 中国联合网络通信集团有限公司 Base station computing force load balancing method, equipment and storage medium
CN115396358A (en) * 2022-08-23 2022-11-25 中国联合网络通信集团有限公司 Route setting method, device and storage medium for computing power perception network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Joint server and route selection in SDN networks; Hasan Anil Akyildiz; 2017 IEEE International Black Sea Conference on Communications and Networking; 2018-02-01; full text *
Research and implementation of a microservice scheduling strategy for computing power networks; Dai Xin; China Excellent Master's Theses Full-text Database; 2022-06-15; full text *

Similar Documents

Publication Publication Date Title
CN112363813A (en) Resource scheduling method and device, electronic equipment and computer readable medium
CN111176792A (en) Resource scheduling method, device and related equipment
CN111614746A (en) Load balancing method and device of cloud host cluster and server
US10983828B2 (en) Method, apparatus and computer program product for scheduling dedicated processing resources
CN108347377B (en) Data forwarding method and device
CN113904923A (en) Service function chain joint optimization method based on software defined network
CN110851235A (en) Virtual network function deployment method suitable for multidimensional resource optimization configuration
CN107113323B (en) Data storage method, device and system
CN111459650A (en) Method, apparatus and computer program product for managing memory of dedicated processing resources
CN115834466B (en) Method, device, equipment, system and storage medium for analyzing path of computing power network
CN109963316B (en) Multipath routing method and equipment for mobile satellite network
CN110019481A (en) Memory database access method, device, equipment and medium
CN116304212A (en) Data processing system, method, equipment and storage medium
CN116150082A (en) Access method, device, chip, electronic equipment and storage medium
US20220121481A1 (en) Switch for managing service meshes
CN114079634B (en) Message forwarding method and device and computer readable storage medium
CN114881221A (en) Mapping scheme optimization method and device, electronic equipment and readable storage medium
CN108520025B (en) Service node determination method, device, equipment and medium
CN113395183A (en) Virtual node scheduling method and system for network simulation platform VLAN interconnection
CN113656046A (en) Application deployment method and device
CN113535388B (en) Task-oriented service function aggregation method
CN112231096A (en) Method, system, equipment and medium for task balancing of FPGA (field programmable Gate array) pooled resources
CN116248577B (en) Method and device for determining calculation force node
CN110960858A (en) Game resource processing method, device, equipment and storage medium
CN112019368B (en) VNF migration method, VNF migration device and VNF migration storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant