CN117938735A - Method and device for determining forwarding path - Google Patents


Info

Publication number
CN117938735A
Authority
CN
China
Prior art keywords
path
preset
linear combination
node
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211315882.1A
Other languages
Chinese (zh)
Inventor
王鑫
崔文琦
刘紫琪
王宇辰
王阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202211315882.1A
Publication of CN117938735A
Legal status: Pending


Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiment of the application provides a method and a device for determining a forwarding path. The method includes: receiving a preset path calculation request and a preset path, where the preset path calculation request includes a first starting node, a first ending node and a first path calculation factor of the preset path, and the preset path is obtained according to an instruction; obtaining a preset path calculation algorithm according to the preset path calculation request and the preset path; receiving a path calculation request, where the path calculation request includes a requested second starting node, a requested second ending node and a requested second path calculation factor; and, if the second starting node is the same as the first starting node and the second ending node is the same as the first ending node, determining a forwarding path according to the second path calculation factor and the preset path calculation algorithm. A forwarding path that satisfies both the path calculation request and the user's requirement can thus be provided.

Description

Method and device for determining forwarding path
Technical Field
The present application relates to the field of communications, and in particular, to a method and apparatus for determining a forwarding path.
Background
With the development of communication technology, in a communication network scenario where a control device centrally computes forwarding paths, for example a centralized path computation architecture in which a Path Computation Element (PCE) computes paths based on Multi-Protocol Label Switching (MPLS) Traffic Engineering (TE) technology, or a Software-Defined Networking (SDN) architecture based on the separation of control and forwarding, a centralized control device (for example a PCE device or an SDN controller) can compute, for a path calculation service request (referred to simply as a path calculation request), a path that satisfies the request based on the centrally managed network topology and the necessary constraints.
In practical deployments, the path computed by the centralized control device satisfies the path calculation request, but it is not necessarily a forwarding path that meets the user's requirement.
Disclosure of Invention
The application provides a method and a device for determining a forwarding path, which can provide a forwarding path that both satisfies the path calculation request and meets the user's requirement.
In a first aspect, the present application provides a method for determining a forwarding path, including: receiving a preset path calculation request and a preset path, where the preset path calculation request includes a first starting node, a first ending node and a first path calculation factor of the preset path, and the preset path is obtained according to an instruction; obtaining a preset path calculation algorithm according to the preset path calculation request and the preset path; receiving a path calculation request, where the path calculation request includes a requested second starting node, a requested second ending node and a requested second path calculation factor; and, if the second starting node is the same as the first starting node and the second ending node is the same as the first ending node, determining a forwarding path according to the second path calculation factor and the preset path calculation algorithm.
The method for determining a forwarding path can be applied to a communication network and implemented by a centralized control device. The centralized control device can store a network topology diagram of the communication network that it centrally manages. The centralized control device receives a user's preset path calculation request and solves for a path from the first starting node to the first ending node according to the request; the solving algorithm may be any of various shortest-path algorithms. The obtained shortest path is compared with the preset path received by the centralized control device, and when the shortest path is the same as, or closest to, the preset path, the weight of each path calculation factor in the first path calculation factor is obtained, and the path calculation algorithm is optimized according to these weights to obtain the preset path calculation algorithm. The centralized control device can store, for different network topology diagrams, multiple preset path calculation algorithms corresponding to different path calculation factors from a first starting node to a first ending node. After the centralized control device is deployed, if a user's path calculation request is received during formal use, the device can determine the first starting node that is the same as the second starting node in the path calculation request, determine the first ending node that is the same as the second ending node, and determine the first path calculation factor that is the same as, or contains, the second path calculation factor, thereby determining the corresponding preset path calculation algorithm, and obtain a forwarding path according to the second starting node, the second ending node, the second path calculation factor and the preset path calculation algorithm. Because the forwarding path is obtained based on the preset path calculation algorithm, which was optimized from the preset path determined according to the user's requirement, the obtained forwarding path is a path that meets the user's requirement.
Further, the path calculation request carries at least the starting node and the ending node of the requested path, and may further carry the path calculation factor, for example information indicating the corresponding path calculation factor. The path calculation factor may further be represented by a cost value, where the cost value includes one or more network parameters such as jitter, delay, packet loss rate, maximum hop count, and bandwidth of the path.
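To make this concrete, the following is a minimal Python sketch of how a path calculation request and its weighted cost value could be represented. The class name, field names and numbers are illustrative assumptions, not part of the application.

```python
from dataclasses import dataclass, field

@dataclass
class PathCalculationRequest:
    """Hypothetical container for a path calculation request (names are illustrative)."""
    start_node: str                                 # e.g. the ingress node
    end_node: str                                   # e.g. the egress node
    factors: dict = field(default_factory=dict)     # path calculation factors, e.g. jitter/delay/loss

def cost_value(factors: dict, weights: dict) -> float:
    """Cost value as a weighted linear combination of the network parameters."""
    return sum(weights.get(name, 0.0) * value for name, value in factors.items())

# Example: cost = A * packet loss rate + B * delay + C * jitter
request = PathCalculationRequest("node1", "node5", {"loss": 0.2, "delay": 15.0, "jitter": 0.12})
print(cost_value(request.factors, {"loss": 10.0, "delay": 8.0, "jitter": 3.0}))
```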
In a possible implementation, the preset path is obtained according to one or more of the paths in a network topology diagram, resource allocation, the first path calculation factor and expert experience, or the preset path is obtained according to a user instruction, where the network topology diagram includes the first starting node, the second starting node, the first ending node and the second ending node.
In a possible implementation, the preset path calculation algorithm is updated when the difference value between the forwarding path and the preset path is greater than a preset threshold.
The preset path calculation algorithm may be updated before device deployment, or updated upon a manual determination that the path calculation result is inaccurate; for example, the path calculation result is determined to be inaccurate, and the preset path calculation algorithm is updated, when the difference value between the forwarding path and the preset path is greater than the preset threshold.
In a possible implementation, obtaining the preset path calculation algorithm according to the preset path calculation request and the preset path includes: setting each path calculation factor in the first path calculation factor to the same priority; obtaining the current linear combination weight of each path calculation factor within that same priority through a gradient descent iterative algorithm; and, if it is determined that the current linear combination weight reaches the accuracy corresponding to the preset path, obtaining the preset path calculation algorithm according to the current linear combination weight.
In a possible implementation, if it is determined that a first linear combination weight does not reach the accuracy corresponding to the preset path, the priority of the path calculation factors within the same priority is split into two single priorities, where the first linear combination weight is the current linear combination weight and the split priorities are lower than the priority before splitting; a second linear combination weight of each path calculation factor within a single priority is obtained through the gradient descent iterative algorithm; and, if it is determined that the second linear combination weight reaches the accuracy corresponding to the preset path, the preset path calculation algorithm is obtained according to the second linear combination weight.
In a possible implementation, if it is determined that the current linear combination weights do not reach the accuracy corresponding to the preset path and there are multiple groups of current linear combination weights, the priority of the path calculation factors of the group with the largest weight among the current linear combination weights is split.
After receiving the preset path calculation requests, the device can perform calculation according to the requests. For each preset path, according to the first starting node and first ending node used to obtain that preset path, a pre-stored shortest-path algorithm is used to compute a shortest path under the current weights, such as the current linear combination weights. The shortest path is compared with the preset path to obtain a difference value. The preset paths are stored in a path set, which includes multiple preset paths, each corresponding to one computed shortest path. Finally, whether the current linear combination weights reach the accuracy corresponding to the preset paths is measured by the sum of the groups of difference values: if the sum of the difference values is smaller than a predetermined value, the current linear combination weights meet the standard; otherwise they do not. If they do not meet the standard, the priority of the path calculation factors within the same priority is split into two single priorities, where the split priority is the next level below the priority before splitting. The device then judges whether the linear combination weights of the path calculation factors within a split single priority reach the accuracy corresponding to the preset path. If so, the preset path calculation algorithm is obtained by optimization according to those linear combination weights; otherwise the priority is split again and the procedure loops according to the splitting result.
In a possible implementation, after receiving the preset path calculation requests and the preset paths, the method further includes: obtaining a comprehensive score through inverse reinforcement learning according to the preset path calculation request and the preset path; and, after receiving the path calculation request, the method further includes: obtaining a path that conforms to the comprehensive score according to the path calculation request, and determining that path as the forwarding path.
After the user's path calculation request is input into the network trained through inverse reinforcement learning, a forwarding path can be generated from the result that reaches the comprehensive score. The forwarding path obtained for the path calculation request is the same as, or closest to, the preset path, and the output result is confirmed as the forwarding path.
In a second aspect, the present application provides a device, including: a receiving module, configured to receive a preset path calculation request and a preset path, where the preset path calculation request includes a first starting node, a first ending node and a first path calculation factor of the preset path, and the preset path is obtained according to an instruction; and a processing module, configured to obtain a preset path calculation algorithm according to the preset path calculation request and the preset path. The receiving module is further configured to receive a path calculation request, where the path calculation request includes a requested second starting node, a requested second ending node and a requested second path calculation factor. The processing module is further configured to determine a forwarding path according to the second path calculation factor and the preset path calculation algorithm if the second starting node is the same as the first starting node and the second ending node is the same as the first ending node.
In a possible implementation, the preset path is obtained according to one or more of the paths in a network topology diagram, resource allocation, the first path calculation factor and expert experience, or the preset path is obtained according to a user instruction, where the network topology diagram includes the first starting node, the second starting node, the first ending node and the second ending node.
In a possible implementation, the processing module is further configured to update the preset path calculation algorithm when the difference value between the forwarding path and the preset path is greater than a preset threshold.
In a possible implementation, the processing module is specifically configured to set each path calculation factor in the first path calculation factor to the same priority; obtain the current linear combination weight of each path calculation factor within that same priority through a gradient descent iterative algorithm; and, if it is determined that the current linear combination weight reaches the accuracy corresponding to the preset path, obtain the preset path calculation algorithm according to the current linear combination weight.
In a possible implementation, the processing module is specifically configured to split the priority of the path calculation factors within the same priority into two single priorities if it is determined that the first linear combination weight does not reach the accuracy corresponding to the preset path, where the first linear combination weight is the current linear combination weight and the split priorities are lower than the priority before splitting; obtain a second linear combination weight of each path calculation factor within a single priority through the gradient descent iterative algorithm; and, if it is determined that the second linear combination weight reaches the accuracy corresponding to the preset path, obtain the preset path calculation algorithm according to the second linear combination weight.
In a possible implementation, the processing module is specifically configured to split the priority of the path calculation factors of the group with the largest weight among the current linear combination weights if it is determined that the current linear combination weights do not reach the accuracy corresponding to the preset path and there are multiple groups of current linear combination weights.
In a possible implementation, the device further includes a learning module, configured to obtain a comprehensive score through inverse reinforcement learning according to the preset path calculation request and the preset path. After the receiving module receives the path calculation request, the learning module is further configured to obtain a path that conforms to the comprehensive score according to the path calculation request and determine that path as the forwarding path.
In a third aspect, the present application provides a network device including a communication interface and a processor. The communication interface is configured to perform the transceiving operations involved in the method of any one of the preceding aspects and any possible implementation thereof, and the processor is configured to perform the operations other than the transceiving operations. For example, when the network device of the third aspect acts as the control device executing the method of the first aspect, the communication interface is configured to receive a preset path calculation request and a preset path, where the preset path calculation request includes a first starting node, a first ending node and a first path calculation factor of the preset path, and the preset path is obtained according to an instruction; the processor is configured to obtain a preset path calculation algorithm according to the preset path calculation request and the preset path; the communication interface is further configured to receive a path calculation request, where the path calculation request includes a requested second starting node, a requested second ending node and a requested second path calculation factor; and the processor is further configured to determine a forwarding path according to the second path calculation factor and the preset path calculation algorithm if the second starting node is the same as the first starting node and the second ending node is the same as the first ending node.
In a fourth aspect, the present application provides a computer readable storage medium having instructions stored therein which, when run on a processor, implement a method as described in any one of the preceding aspects and some or all of the operations included in any one of the possible implementations of any one of the preceding aspects.
In a fifth aspect, the present application provides a computer program product comprising instructions which, when run on a processor, implement the method of any one of the preceding aspects and some or all of the operations included in any one of the possible implementations of any one of the preceding aspects.
In a sixth aspect, the present application provides a chip comprising: an interface circuit and a processor. The interface circuit is coupled to the processor for causing the chip to perform some or all of the operations included in the method of any one of the preceding aspects and any possible implementation of any one of the preceding aspects.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments of the present application will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a PCE network provided in an embodiment of the present application;
Fig. 2 is a flow chart of a method for determining a forwarding path according to an embodiment of the present application;
FIG. 3 is a network topology provided by an embodiment of the present application;
FIG. 4 is a schematic flowchart of optimizing a preset path calculation algorithm provided by an embodiment of the application;
FIG. 5 is a schematic diagram of a network structure of a comprehensive score solving algorithm according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an apparatus for determining a forwarding path according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of another apparatus for determining a forwarding path according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a network device 40 according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a network device 50 according to an embodiment of the present application.
Detailed Description
In order to make the solution of the present application better understood by those skilled in the art, the technical solution of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone.
The terms first and second and the like in the description and in the claims of embodiments of the application, are used for distinguishing between different objects and not necessarily for describing a particular sequential order of objects. For example, the first target object and the second target object, etc., are used to distinguish between different target objects, and are not used to describe a particular order of target objects.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g." in an embodiment should not be taken as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" means two or more. For example, the plurality of processing units refers to two or more processing units; the plurality of systems means two or more systems.
The method for determining a forwarding path provided by the embodiments of the present application can be applied to a communication network and implemented by various devices for determining a forwarding path. For example, in a communication network with centralized control and management capability, the device for determining a forwarding path may be a centralized control device. The centralized control device may store a network topology diagram of the communication network it centrally manages, and may store or receive path calculation factors of the communication network. The centralized control device receives a user's path calculation service request, recorded simply as the path calculation request, and calculates a forwarding path that conforms to the path calculation request according to the corresponding path calculation factor in the request, for example a forwarding path that is the shortest path between the starting node and the ending node of the path calculation request. The path calculation request carries at least the starting node and the ending node of the requested forwarding path, and may further carry the path calculation factor or information indicating the corresponding path calculation factor. Further, the path calculation factor may be represented by a cost value, where the cost value includes one or more network parameters such as jitter, delay, packet loss rate, maximum hop count and path bandwidth. For example, if the cost value in the path calculation request includes only the maximum hop count, and the maximum hop count is limited to 3 hops, then any path from the starting node to the ending node with a hop count less than or equal to 3 satisfies the path calculation request. In general, the cost value may be obtained from multiple network parameters with different weights based on the path requirement. For example, if the cost value includes jitter, delay and packet loss rate, the path calculation factor in the path calculation request may be represented by the cost value A x packet loss rate + B x delay + C x jitter, where A is the weight of the packet loss rate, B is the weight of the delay and C is the weight of the jitter, and the forwarding path for the path calculation request is obtained based on this weighted combination.
In a possible implementation, the path calculation request may carry an upper limit on the cost value of the requested forwarding path, recorded as a specified cost value. In that case, the cost value of the path calculated by the centralized control device should be smaller than the specified cost value. In some scenarios, multiple calculated paths may all conform to the specified cost value. For example, suppose the specified cost value is 20 and the path calculation factor in the path calculation request includes packet loss rate, delay and jitter. The calculation may yield a path 1 that conforms to the request, with packet loss rate less than or equal to 21% and A = 10, delay less than or equal to 15 ms and B = 8, and jitter less than 12% and C = 3, corresponding to cost = 19; and a path 2 with packet loss rate less than or equal to 20% and A = 9, delay less than or equal to 13 ms and B = 7, and jitter less than 10% and C = 5, also conforming to the specified cost value, so that both path 1 and path 2 conform to the path calculation request. If only one path is selected empirically or randomly and confirmed as the forwarding path, it may happen that the confirmed forwarding path is path 1, while path 2 is the forwarding path that best meets the user's needs. In an actual usage scenario, when the proportions of the weights of the different network parameters in the path that meets the user's demand cannot be accurately obtained, once the path solving yields multiple paths conforming to the path calculation request, selecting a forwarding path empirically or at random gives a path that satisfies the path calculation request but is not necessarily the one that best meets the user's demand. To address this, the present application provides a method for determining a forwarding path, in which the weights of the different path calculation factors, that is, the weights of the network parameters, can be accurately obtained through calculation against preset paths determined in advance according to the user's requirements. When a path calculation request is received, the device can then calculate the path according to a preset path calculation algorithm optimized with the determined weights, and the obtained forwarding path is the path that best meets the user's requirement, or the path closest to it. As an example, the method is applied to a PCE network, where the PCE network is a centralized path computation architecture based on MPLS TE technology, as shown in fig. 1.
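The ambiguity described above can be illustrated with a small Python sketch: two hypothetical candidate paths that both satisfy a specified cost ceiling, where different weightings of packet loss rate, delay and jitter select different paths. The numbers and weight values below are illustrative assumptions chosen so that neither path dominates the other; they are not taken from the application.

```python
# Two hypothetical candidate paths below a specified cost ceiling; neither dominates the other.
candidates = {
    "path1": {"loss": 0.15, "delay": 20.0, "jitter": 0.12},
    "path2": {"loss": 0.25, "delay": 10.0, "jitter": 0.10},
}

def cost(params, weights):
    # cost value = weighted linear combination of the network parameters
    return sum(weights[k] * params[k] for k in weights)

weights_loss_heavy  = {"loss": 50.0, "delay": 0.1, "jitter": 5.0}   # user cares most about packet loss
weights_delay_heavy = {"loss": 5.0,  "delay": 1.0, "jitter": 5.0}   # user cares most about delay

for w in (weights_loss_heavy, weights_delay_heavy):
    best = min(candidates, key=lambda name: cost(candidates[name], w))
    print(best)   # path1 under the loss-heavy weights, path2 under the delay-heavy weights
```

Unless the weights are known, there is no principled way to pick between the two candidates, which is exactly the gap the preset path calculation algorithm is meant to close.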
Fig. 1 is a schematic structural diagram of a PCE network provided by an embodiment of the present application. The PCE network includes a PCE 10, a Path Computation Client (PCC) 20 and other nodes. The PCE 10 is the party that completes the path computation and stores a network topology diagram of the PCE network and the corresponding path information of the PCE network. The PCC 20 is the initiator of the path calculation request; typically, the PCC 20 is deployed on an Ingress node of the PCE network. The PCE 10 and the PCC 20 communicate through the Path Computation Element Communication Protocol (PCEP); for example, the PCC 20 sends a PCEP Report message to the PCE 10, and the path calculation request is carried by that message. After the PCE 10 receives the path calculation request, it computes a path according to the network-wide resource situation of the PCE network, the stored path information of the PCE network and the pre-optimized preset path calculation algorithm, and returns the computed path to the PCC 20, so that the forwarding path can be established and updated accordingly, for example the PCC 20 establishes the path according to the returned result.
Further, fig. 2 is a flow chart of a method for determining a forwarding path according to an embodiment of the present application. As shown in fig. 2, the flow for determining a forwarding path includes a preparation flow and a formal use flow. The preparation flow includes S101 and S102; it may be executed before device deployment, or executed again upon a manual determination that the path calculation result is inaccurate, for example when the difference value between the forwarding path and the preset path is greater than a preset threshold, in which case the preset path calculation algorithm is updated. The formal use flow includes S103 and S104. The method includes the following steps:
S101, the device receives a preset path calculation request and a preset path.
The user inputs a batch of executable preset paths and network topology diagrams based on the network topology diagrams and the path calculation requirements. The user may be a user of the device, or an expert with experience in optimizing the paths and resources of path calculation, in which case the preset paths may be recorded as expert paths. An expert path is the optimal path, judged according to the user's requirements, among the paths obtained for a network topology diagram and a preset path calculation request. The optimal path may be judged comprehensively according to the suitability of the path, the resource allocation of the network topology diagram with respect to the user's requirements, the first path calculation factor, the expert's experience and other factors; for example, these factors are scored comprehensively and the path with the highest score is taken, or the expert selects the optimal path by experience. The preset path calculation request includes the starting node and the ending node, in the network topology diagram, of the forwarding path to be obtained, recorded as the first starting node and the first ending node respectively. The path calculation request also includes, or indicates, a first path calculation factor, which may include one or more network parameters such as jitter, delay, hop count and packet loss rate. The first starting node may be an ingress node, and the first ending node may be any node in the network topology diagram other than the ingress node, such as an egress node.
For example, fig. 3 is a network topology diagram provided by an embodiment of the present application. Suppose a preset path calculation request asks for a forwarding path from node 1 to node 5. Referring to fig. 3, in the case where multiple paths all conform to the starting node 1 and the ending node 5, an optimal forwarding path may be selected based on the paths of the network topology diagram, the resource allocation, the first path calculation factor and expert experience. If the selected path is from node 1 to node 4 to node 5, that path is a preset path and may be recorded as preset path 1. Preset path 1 may be stored in a path set, and the corresponding first path calculation factors of preset path 1, for example the three network parameters jitter, delay and packet loss rate, are also stored correspondingly. Suppose another preset path calculation request asks for a forwarding path from node 1 to node 3; then paths such as node 1 to node 2 to node 3, and node 1 to node 4 to node 3, all conform to the starting node 1 and the ending node 3, and the optimal forwarding path can again be selected based on the paths of the network topology diagram, the resource allocation, the path calculation factor and expert experience. If the selected path is from node 1 to node 2 to node 3, that path is a preset path, recorded as preset path 2, and preset path 2 may also be stored in the path set. Further, when a preset path is stored, each network parameter in the first path calculation factor of the preset path may also be stored correspondingly. In this way a path set corresponding to the network topology diagram is obtained, containing the preset paths corresponding to each preset path calculation request. In a possible implementation, the path set may store multiple preset paths corresponding to different network topology diagrams respectively.
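As a concrete illustration of such a path set, the sketch below stores preset (expert) paths per topology, keyed by starting node and ending node together with the path calculation factors that were considered when the expert chose them. The dictionary layout and names are assumptions made for illustration only.

```python
# Hypothetical in-memory path set, keyed per topology by (starting node, ending node).
path_set = {
    "topology_1": {
        ("node1", "node5"): {
            "preset_path": ["node1", "node4", "node5"],
            "factors": ("jitter", "delay", "loss"),
        },
        ("node1", "node3"): {
            "preset_path": ["node1", "node2", "node3"],
            "factors": ("jitter", "delay", "loss"),
        },
    },
}

def lookup_preset(topology: str, src: str, dst: str):
    """Return the stored preset path entry for (src, dst) in the given topology, if any."""
    return path_set.get(topology, {}).get((src, dst))

print(lookup_preset("topology_1", "node1", "node5"))
```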
S102, the device obtains a preset path calculation algorithm according to the preset path calculation request and the preset path.
After receiving the preset path calculation request and the preset path input by the user, the device may optimize the path calculation algorithm according to the preset path calculation request and the corresponding preset path (that is, the expert path) to obtain the preset path calculation algorithm. For example, from the preset path calculation request asking for a forwarding path from node 1 to node 5 and the corresponding preset path 1, the device can calculate the weights of the first path calculation factor, that is, referring to the previous example, the linear combination weights of jitter, delay and packet loss rate; the other preset paths are handled in the same way. The obtained linear combination weights corresponding to the first path calculation factor are used to optimize the path calculation algorithm and obtain the preset path calculation algorithm. In the subsequent formal use process, the device can then determine, according to the second starting node and second ending node in a path calculation request, the preset path calculation algorithm corresponding to the same first starting node and first ending node, and, by solving with that path calculation algorithm, obtain the forwarding path closest to the preset path, that is, the optimal path meeting the requirement behind the preset path.
In some examples, the device obtains the preset path calculation algorithm according to the preset path calculation request and the preset path in the preparation flow, and in the formal use flow the device receives the path calculation request and determines the forwarding path through the preset path calculation algorithm. In other examples, the calculation steps shown in the preparation flow above may be replaced by training a neural network to obtain a learning result, for example a comprehensive score of the preset path; in the formal use flow, after the device inputs the received path calculation request, the forwarding path that achieves the comprehensive score is obtained, that is, the forwarding path can be determined.
Further, referring to the network topology diagram in fig. 3, the requested path corresponding to the preset path calculation request starts from node 1 and ends at node 5, and the first path calculation factor to be considered for the path calculation includes the three network parameters jitter, delay and packet loss rate; referring to the above example, the determined expert path is preset path 1, namely from node 1 to node 4 to node 5. The user can provide the preset path calculation request and can also determine preset path 1, but the user cannot determine what the weight value of each path calculation factor must be for preset path 1 to be obtained from the preset path calculation request. Consequently, in the formal use flow it would not be possible to calculate, for a path calculation request from node 1 to node 5 whose second path calculation factor also includes jitter, delay and packet loss rate, the forwarding path closest to preset path 1. Therefore, in the preparation flow, the device needs to obtain in advance, from the preset path calculation request and the first path calculation factor, the weight value of each path calculation factor under which preset path 1 is obtained, and then optimize the algorithm with these weight values to obtain the preset path calculation algorithm. In the formal flow, when the device receives a path calculation request, it can calculate through the preset path calculation algorithm and obtain a forwarding path equal or closest to preset path 1. The method for optimizing the preset path calculation algorithm is shown in fig. 4. Fig. 4 is a schematic flowchart of optimizing a preset path calculation algorithm provided by an embodiment of the application, and includes:
S201, the device sets the priority of each path calculation factor in the first path calculation factor to the same priority.
For example, when the requested path corresponding to the preset path calculation request is from node 1 to node 5 and the corresponding expert path is preset path 1, the first path calculation factor includes jitter, delay and packet loss rate. The device denotes the jitter as a1, the delay as a2 and the packet loss rate as a3, and the priorities of a1, a2 and a3 may all be initialized to the highest priority, that is, the same priority; the first path calculation factor may then be denoted as (a1, a2, a3).
S202, the device solves for the current linear combination weight of each path calculation factor within the same priority through a gradient descent iterative algorithm.
According to the preset path calculation request, the device may set the first starting node of the path to src1 and the first ending node to dst1, with the path passing through m1 edges, that is, edges (1, 2, ..., m1) between the start point src1 and the end point dst1, where m1 is a positive integer greater than 1. In a possible implementation, the device may calculate the shortest path from the first starting node src1 to the first ending node dst1 using a shortest-path algorithm such as the Dijkstra algorithm, the A* (A-Star) algorithm or the Bellman-Ford algorithm. For example, using the Dijkstra algorithm, the device calculates the shortest path path_i with the same starting node and ending node as the expert path expertpath_i, and the accumulated profit of path_i may be recorded as Reward_pathi, as shown in equation 1:

Reward_pathi = Σ over the m1 edges of path_i of ( Σ_j a_j × W_j )      (equation 1)

where Reward_pathi is the cumulative gain of the path, a is the weight, W is the path calculation factor, and j runs over the path calculation factors in the first path calculation factor. For example, if the first path calculation factor in the preset path calculation request includes jitter a1, delay a2 and packet loss rate a3, there are three factors and the weighted terms a_j W_j may be denoted as a(a1, a2, a3).

Since the expert path corresponding to the preset path calculation request is determined, its accumulated benefit is recorded as Reward_expertpathi. In order to obtain the path closest to the expert path, such as the path path_i in equation 1, the accumulated benefit of the path is compared with the accumulated benefit of the expert path: the smaller the absolute difference between the two, the closer the path is to the expert path. The calculation can therefore be performed according to the gradient descent iterative algorithm in equation 2 until the path closest to the expert path is obtained, which determines the weight a; when there are multiple path calculation factors, the weight a is the linear combination weight of the path calculation factors.

â = argmin_a Σ_{i=1}^{n} | Reward_expertpathi − Reward_pathi |      (equation 2)

where Reward_pathi is the cumulative benefit of path_i from equation 1, Reward_expertpathi is the accumulated benefit of the expert path, â is the value of the variable a at which equation 2 reaches its minimum, n is a positive integer greater than 1, and the sum is the total difference between the accumulated returns of the n expert paths and of the paths calculated by the algorithm in equation 1. Further, the path calculation factors may be normalized, for example to common units for jitter, delay and packet loss rate, before calculating equations 1 and 2.
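The following Python sketch mirrors equations 1 and 2: the cumulative reward of a path as a weighted linear combination of per-edge path calculation factors, the sum of absolute differences against the expert paths, and one finite-difference gradient-descent step on the weights. It assumes the candidate shortest paths are recomputed elsewhere after each weight update (for example with Dijkstra, see the sketch under S104); the function names, the finite-difference approximation and the toy data are illustrative assumptions, not the application's exact procedure.

```python
def cumulative_reward(path, weights, edge_metrics):
    """Equation 1 (sketch): Reward_path = sum over the path's edges of a_j * W_j."""
    total = 0.0
    for u, v in zip(path, path[1:]):
        metrics = edge_metrics[(u, v)]          # e.g. {"jitter": ..., "delay": ..., "loss": ...}
        total += sum(weights[k] * metrics[k] for k in weights)
    return total

def total_difference(weights, path_pairs, edge_metrics):
    """Equation 2 objective (sketch): sum_i |Reward_expertpath_i - Reward_path_i|."""
    return sum(abs(cumulative_reward(expert, weights, edge_metrics) -
                   cumulative_reward(candidate, weights, edge_metrics))
               for expert, candidate in path_pairs)

def gradient_step(weights, path_pairs, edge_metrics, lr=0.01, eps=1e-4):
    """One finite-difference gradient-descent step on the objective (illustrative only)."""
    base = total_difference(weights, path_pairs, edge_metrics)
    new_weights = dict(weights)
    for k in weights:
        bumped = dict(weights)
        bumped[k] += eps
        grad = (total_difference(bumped, path_pairs, edge_metrics) - base) / eps
        new_weights[k] = max(0.0, weights[k] - lr * grad)
    return new_weights

# Tiny illustrative data: one (expert path, candidate path) pair on a small topology.
edge_metrics = {
    ("n1", "n4"): {"jitter": 0.1, "delay": 2.0, "loss": 0.3},
    ("n4", "n5"): {"jitter": 0.2, "delay": 3.0, "loss": 0.2},
    ("n1", "n5"): {"jitter": 0.5, "delay": 9.0, "loss": 0.1},
}
pairs = [(["n1", "n4", "n5"], ["n1", "n5"])]
print(gradient_step({"jitter": 1.0, "delay": 1.0, "loss": 1.0}, pairs, edge_metrics))
```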
S203, the device judges whether the current linear combination weight reaches the accuracy of the preset paths in the path set.
Further, after the device completes S202, that is, for each expert path the device inputs the first starting node and the first ending node used to obtain that expert path and computes, using the Dijkstra algorithm, the shortest path under the current weights, such as the current linear combination weights, it compares that shortest path with the expert path to obtain a difference value. The n expert paths are stored in the path set, each corresponding to one calculated shortest path. Finally, the device measures whether the current linear combination weights meet the standard by the sum of the n groups of difference values: if they meet the standard, that is, reach the accuracy of the preset paths in the path set, S204 is executed; otherwise S205 is executed. For example, if there is only one path calculation factor, such as delay, and the current weight is A1, the device uses the Dijkstra algorithm to calculate a shortest path 3 under the condition that the weight of the delay is A1, compares shortest path 3 with the corresponding expert path to obtain a difference value, and so on. The n expert paths stored in the path set each correspond to a shortest path, and the device determines whether the current weight meets the standard according to whether the sum of the n difference values is smaller than a predetermined value: if the sum is smaller than the predetermined value, the current weight meets the standard; otherwise it does not. Similarly, when the first path calculation factor includes delay, packet loss rate and jitter, the weights over the path calculation factors form a linear combination, and the device measures whether the current linear combination weights meet the standard by judging whether the sum of the n groups of difference values is smaller than the predetermined value; when comparing linear combination weights, the weights of the different path calculation factors may be compared respectively.
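A minimal sketch of this accuracy check follows, under the assumption that the per-path difference value is measured as the number of edges on which the computed shortest path and the expert path disagree; the application does not fix a particular difference measure, so this is only one possible choice and the data is illustrative.

```python
def path_difference(path_a, path_b):
    """One possible difference measure (an assumption): edges present in one path but not the other."""
    edges = lambda p: set(zip(p, p[1:]))
    return len(edges(path_a) ^ edges(path_b))

def weights_meet_accuracy(expert_paths, computed_paths, predetermined_value):
    """Sum the n per-path difference values and compare against the predetermined value."""
    total = sum(path_difference(e, c) for e, c in zip(expert_paths, computed_paths))
    return total < predetermined_value

experts  = [["n1", "n4", "n5"], ["n1", "n2", "n3"]]
computed = [["n1", "n4", "n5"], ["n1", "n4", "n3"]]
print(weights_meet_accuracy(experts, computed, predetermined_value=5))   # True: total difference is 4
print(weights_meet_accuracy(experts, computed, predetermined_value=3))   # False
```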
S204, the device obtains the priority of the first path calculation factor and its linear combination weights.
When the current linear combination weights meet the standard, the device confirms the current priority of each path calculation factor as the priority of the first path calculation factor, and the priority and the linear combination weights can be used to correspondingly optimize the path calculation algorithm to obtain the preset path calculation algorithm. Further, if the accuracy is reached only after the priority of the first path calculation factor has been split multiple times, the resulting priorities of the path calculation factors may differ from one another.
S207 is performed after S204.
S205, the device sorts the linear combination weights within the same priority, selects the group with the largest linear combination weight, and splits the priority of its path calculation factors into two single priorities.
Here, a single priority refers to one priority level whose factors share the same priority.
S206, the device calculates the linear combination weights within each single priority, priority by priority.
In general, the first path calculation factor includes multiple network parameters, so the description here takes the current weights to be the linear combination weights. To distinguish different linear combination weights, the current linear combination weight is recorded as the first linear combination weight. If it is determined that the first linear combination weight does not meet the standard, the device splits the priority of the path calculation factors within the same priority into two single priorities, where the split priority is lower than the priority before splitting, for example the next level below it. The device then obtains the second linear combination weights of the path calculation factors within the split single priorities. If the second linear combination weights are determined to meet the standard, they are taken as the current weights and S204 is executed. If the second linear combination weights do not meet the standard, the device sorts them within the same priority, selects the group with the largest linear combination weight, splits the priority of its path calculation factors into two single priorities, and obtains the third linear combination weights of the path calculation factors within the split single priorities. If the third linear combination weights are determined to meet the standard, S204 is executed; otherwise S205 continues to be executed, and the above steps loop. When different path calculation factors have different priorities, the linear combination weights can be compared starting from the highest priority, and when the higher priorities are the same, the linear combination weight of the next priority is judged. In general, the device recalculates with the obtained linear combination weights through the algorithm of S202 and then determines through S203 whether the accuracy of the expert paths is reached; if so, S204 is executed, otherwise S205 is executed.
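The loop of S202 to S206 can be sketched as follows. The grouping of factors into priority levels, the rule for choosing which group to split and the way the split halves are ordered are assumptions made for illustration; `solve_weights` stands in for the gradient-descent solving of S202 and `meets_accuracy` for the check of S203.

```python
def optimize_priorities(factors, solve_weights, meets_accuracy, max_rounds=10):
    """Sketch of the S201-S206 loop: start with all path calculation factors at the
    same (highest) priority, solve the linear combination weights, and if the accuracy
    is not reached split the group carrying the largest combined weight into two
    lower-priority groups, then solve again."""
    groups = [list(factors)]                     # one group == one priority level, highest first
    weights = solve_weights(groups)
    for _ in range(max_rounds):
        if meets_accuracy(weights):
            return groups, weights               # S204: keep these priorities and weights
        # S205: pick the group whose factors carry the largest combined weight ...
        largest = max(range(len(groups)),
                      key=lambda i: sum(weights[f] for f in groups[i]))
        if len(groups[largest]) < 2:
            break                                # a single factor cannot be split further
        group = groups.pop(largest)
        mid = len(group) // 2
        groups.extend([group[:mid], group[mid:]])  # ... and split it into two lower-priority groups
        weights = solve_weights(groups)          # S206: re-solve within the new priorities
    return groups, weights

# Toy example: a fixed stand-in solver and a check that never passes, so the loop
# keeps splitting priorities until no group can be split further.
toy_solver = lambda groups: {"jitter": 0.2, "delay": 0.5, "loss": 0.3}
toy_check  = lambda weights: sum(weights.values()) < 0.9
print(optimize_priorities(["jitter", "delay", "loss"], toy_solver, toy_check))
```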
S207, the device optimizes the algorithm according to the linear combination weights and the corresponding priorities, and stores the result as the preset path calculation algorithm.
The preset path calculation algorithm result can be output and displayed to the user through a front-end display device or the like, so that the user can confirm whether the result is used to optimize the algorithm and is stored; alternatively, the device can automatically store the result as the preset path calculation algorithm. When the device receives a path calculation request in the formal use flow, the preset path calculation algorithm can be used for path calculation.
The formal use flow includes S103 and S104.
S103, the device receives a path calculation request.
The path calculation request includes the second starting node and the second ending node, in the network topology diagram, of the forwarding path to be obtained, and also includes or indicates a second path calculation factor, which may include network parameters such as jitter, delay, hop count and packet loss rate. After receiving the path calculation request, the device may match, according to the second starting node and the second ending node, the preset path calculation algorithms whose first starting node and first ending node are the same. Among the matched preset paths, the device may further select, according to the second path calculation factor, the preset path whose first path calculation factor is the same as, or similar to, the second path calculation factor, and calculate the path according to the preset path calculation algorithm of that preset path. For example, the first path calculation factor may contain the second path calculation factor: if the second path calculation factor is jitter and delay, the device may select a preset path calculation algorithm optimized with jitter and delay as the first path calculation factor, or one optimized with jitter, delay and packet loss rate. In a possible implementation, if the second path calculation factor includes jitter, delay and packet loss rate, but the first path calculation factors corresponding to the same starting node and ending node include only jitter and delay, or jitter, delay and hop count, the preset path calculation algorithm corresponding to the closest first path calculation factor may be selected to perform the path calculation.
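A minimal sketch of this matching step follows: stored preset path calculation algorithms are keyed by starting node, ending node and the set of first path calculation factors, and an incoming request is matched to an exact factor set or, failing that, to the smallest stored set containing the requested one. The storage layout, key structure, weight values and tie-breaking rule are assumptions for illustration.

```python
# Hypothetical store of optimized weight sets, keyed by (src, dst, frozenset of factors).
preset_algorithms = {
    ("node1", "node5", frozenset({"jitter", "delay"})):         {"jitter": 0.3, "delay": 0.7},
    ("node1", "node5", frozenset({"jitter", "delay", "loss"})): {"jitter": 0.2, "delay": 0.5, "loss": 0.3},
}

def match_algorithm(src, dst, requested_factors):
    """Prefer an exact factor match; otherwise the smallest stored set containing the request."""
    requested = frozenset(requested_factors)
    exact = preset_algorithms.get((src, dst, requested))
    if exact is not None:
        return exact
    supersets = [(key, weights) for key, weights in preset_algorithms.items()
                 if key[0] == src and key[1] == dst and requested <= key[2]]
    if supersets:
        return min(supersets, key=lambda kv: len(kv[0][2]))[1]
    return None

print(match_algorithm("node1", "node5", {"jitter", "delay"}))
```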
Further, during calculation, the resources related to the path calculation request need to be loaded, and the device can read the related data and resources required by the calculation process from a database.
S104, the device determines a forwarding path according to the preset path calculation algorithm and the path calculation request.
The device calculates with the second path calculation factor according to the preset path calculation algorithm, that is, using the weights corresponding to the preset path calculation algorithm, such as the linear combination weights, to obtain a path, which is determined as the forwarding path. For example, suppose the preset path calculation algorithm corresponds to the combination D x packet loss rate + F x delay + E x jitter, where the weight of the packet loss rate is D, the weight of the delay is F and the weight of the jitter is E. If the second path calculation factor also includes packet loss rate, delay and jitter, the forwarding path is calculated by applying the linear combination weights to the corresponding second path calculation factors. Further, if the second path calculation factor includes only packet loss rate and delay, the path may still be calculated using the weight D for the packet loss rate and the weight F for the delay.
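For completeness, here is a self-contained sketch of computing the forwarding path with a shortest-path search whose edge cost is the learned linear combination of path calculation factors. Dijkstra is used here because the description mentions it, but other shortest-path algorithms would serve equally; the graph encoding, metric names and numbers are assumptions.

```python
import heapq

def weighted_dijkstra(graph, src, dst, weights):
    """Shortest path where each edge's cost is the linear combination of its
    path calculation factors under the learned weights (illustrative sketch)."""
    # graph: {node: {neighbour: {"delay": ..., "loss": ..., ...}}}
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, metrics in graph.get(node, {}).items():
            if nbr in seen:
                continue
            edge_cost = sum(weights.get(k, 0.0) * metrics.get(k, 0.0) for k in weights)
            heapq.heappush(queue, (cost + edge_cost, nbr, path + [nbr]))
    return float("inf"), None

graph = {
    "node1": {"node2": {"delay": 5, "loss": 0.1}, "node4": {"delay": 2, "loss": 0.3}},
    "node2": {"node3": {"delay": 4, "loss": 0.1}},
    "node4": {"node3": {"delay": 6, "loss": 0.1}, "node5": {"delay": 3, "loss": 0.2}},
    "node3": {"node5": {"delay": 1, "loss": 0.1}},
}
# With these illustrative weights the result is node1 -> node4 -> node5.
print(weighted_dijkstra(graph, "node1", "node5", {"delay": 0.7, "loss": 0.3}))
```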
In some examples, steps S201 to S207 for obtaining the preset path calculation algorithm in S102 may be replaced by obtaining a comprehensive score through inverse reinforcement learning; that is, the device may use the comprehensive score as the preset path calculation algorithm to calculate the forwarding path. Referring to fig. 5, fig. 5 is a schematic diagram of the network structure of the comprehensive score solving algorithm provided by an embodiment of the present application. As shown in fig. 5, the comprehensive score may be solved in a neural network that includes a policy neural network 101 and a binary classification model 102. The structure of the policy neural network 101 may be a fully connected neural network, a Recurrent Neural Network (RNN), a Long Short-Term Memory (LSTM) network or the like, and the classification model 102 may be a logistic regression model, a Support Vector Machine (SVM) or the like. The policy neural network 101 and the classification model 102 can be trained according to the first path calculation factor of the preset path calculation request and the corresponding preset path, the training goal being to obtain the forwarding path closest to the preset path (the expert path). The flow by which the policy neural network 101 generates a path is as follows: an initial network topology state S0 is input, and the policy neural network 101 outputs a0 in that state, where a0 is the next-hop node in the network topology diagram of the initial state S0; the network topology state then becomes S1, and the policy neural network 101 outputs a1 according to S1; this repeats until a complete path is obtained. After the policy neural network 101 predicts which node is the next hop, the classification model 102 determines, according to the preset path, whether the node is selected correctly; the method of determining correctness according to the preset path may refer to the above examples S201 to S203. In implementation, each expert path is decomposed into a combination of triples (s, a, s'); the policy neural network 101 predicts the next-hop node, and the classification model 102 judges whether the node of each triple is correct. If the predicted node is judged incorrect by the classification model 102, the node is deleted and the policy neural network 101 predicts again. The policy neural network 101 and the classification model 102 are trained according to multiple preset paths, and after the training converges the comprehensive score is output. The calculation of the comprehensive score (reward) may refer to equation 3:
r(s, a, s′) = log D(s, a, s′) - log[1 - D(s, a, s′)]      (equation 3)

where r is the reward, s is the existing state, that is, the node currently occupied together with all previously selected node information, a is the action (which node the next hop of the path selects), s′ is the state after action a (the node selected by a together with all previously selected node information), and D represents the classification model 102.
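The following is a small sketch of the reward in equation 3 with a stand-in discriminator. The real classification model 102 would be a trained logistic regression, SVM or similar, so the hard-coded probabilities and the way the state is encoded below are purely illustrative assumptions.

```python
import math

def irl_reward(discriminator, s, a, s_next):
    """Equation 3 (sketch): r(s, a, s') = log D(s, a, s') - log(1 - D(s, a, s'))."""
    d = discriminator(s, a, s_next)
    d = min(max(d, 1e-6), 1.0 - 1e-6)          # clamp to avoid log(0)
    return math.log(d) - math.log(1.0 - d)

# Stand-in discriminator: transitions that follow the preset (expert) path score high.
preset_edges = {("node1", "node4"), ("node4", "node5")}
stand_in_D = lambda s, a, s_next: 0.9 if (s, a) in preset_edges else 0.2

print(irl_reward(stand_in_D, "node1", "node4", ("node1", "node4")))   # positive reward
print(irl_reward(stand_in_D, "node1", "node2", ("node1", "node2")))   # negative reward
```

Transitions that the discriminator attributes to the expert path receive a positive reward, steering the policy neural network toward paths that match the preset path.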
In the formal use flow, after the user's path calculation request is input to the trained policy neural network 101, the policy neural network 101 and the classification model 102 can generate a forwarding path according to the result that reaches the comprehensive score. The forwarding path obtained for the path calculation request is the same as, or closest to, the preset path, and the device's output result is confirmed as the forwarding path.
In the preparation flow, the embodiment of the present application can thus obtain a preset path calculation algorithm that calculates the path closest to the preset path, whether by the method of calculating accurate weights or by the method of the comprehensive score. Whichever method is used for the optimization, in the formal use flow the device can process the received path calculation request through the preset path calculation algorithm and obtain the forwarding path that best meets the user's requirement.
Fig. 6 is a schematic structural diagram of an apparatus for determining a forwarding path according to an embodiment of the present application. As shown in fig. 6, the apparatus 30 includes:
The receiving module 301 is configured to receive a preset path calculation request and a preset path, where the preset path calculation request includes a first starting node, a first ending node and a first path calculation factor of the preset path, and the preset path is obtained according to an instruction. The processing module 302 is configured to obtain a preset path calculation algorithm according to the preset path calculation request and the preset path. The receiving module 301 is further configured to receive a path calculation request, where the path calculation request includes a requested second starting node, a requested second ending node and a requested second path calculation factor. The processing module 302 is further configured to determine a forwarding path according to the second path calculation factor and the preset path calculation algorithm if the second starting node is the same as the first starting node and the second ending node is the same as the first ending node.
In a possible implementation, the preset path is obtained according to one or more of the paths in a network topology diagram, resource allocation, the first path calculation factor and expert experience, or the preset path is obtained according to a user instruction, where the network topology diagram includes the first starting node, the second starting node, the first ending node and the second ending node.
In a possible implementation, the processing module 302 is further configured to update the preset path calculation algorithm when the difference value between the forwarding path and the preset path is greater than a preset threshold.
In one possible implementation, the processing module 302 is configured to obtain, through a gradient descent iterative algorithm, the current linear combination weight of each path calculation factor in the same priority; and, if it is determined that the current linear combination weight reaches the accuracy corresponding to the preset path, obtain the preset path calculation algorithm according to the current linear combination weight.
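A minimal sketch of this step follows, assuming a squared-margin loss that pushes the preset (expert) path to be the cheapest candidate under the linear combination of path calculation factors; the concrete loss, learning rate, and non-negativity constraint are not specified by the text and are chosen here only for illustration.

```python
from typing import List

def path_cost(weights: List[float], factors: List[float]) -> float:
    """Linear combination of per-path factor values (e.g. delay, hop count, load)."""
    return sum(w * f for w, f in zip(weights, factors))

def fit_weights(expert_factors: List[float],
                candidate_factors: List[List[float]],
                lr: float = 0.01, steps: int = 1000) -> List[float]:
    """Gradient descent on a squared-margin loss: penalise any candidate path
    that is cheaper than the preset (expert) path under the current weights."""
    n = len(expert_factors)
    w = [1.0 / n] * n                                    # start from equal weights
    for _ in range(steps):
        grad = [0.0] * n
        for cand in candidate_factors:
            margin = path_cost(w, expert_factors) - path_cost(w, cand)
            if margin > 0:                               # expert path is not yet the cheapest
                for i in range(n):
                    grad[i] += 2.0 * margin * (expert_factors[i] - cand[i])
        w = [max(0.0, wi - lr * gi) for wi, gi in zip(w, grad)]  # keep weights non-negative (assumption)
    return w

# Toy usage with two path calculation factors (delay, hop count):
print(fit_weights([10.0, 3.0], [[8.0, 6.0], [12.0, 2.0]]))
```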
In one possible implementation, the processing module 302 is specifically configured to: if it is determined that a first linear combination weight does not reach the accuracy corresponding to the preset path, split the priority of the path calculation factors in the same priority into two single priorities, where the first linear combination weight is the current linear combination weight and the split priorities are lower than the priority before splitting; obtain, through a gradient descent iterative algorithm, a second linear combination weight of each path calculation factor in the single priorities; and, if it is determined that the second linear combination weight reaches the accuracy corresponding to the preset path, obtain the preset path calculation algorithm according to the second linear combination weight.
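The priority-splitting fallback can be sketched as below; the half-and-half split point and the accuracy predicate are assumptions made for illustration only, since the text does not fix them.

```python
# Heavily hedged sketch: if the weights fitted for one priority cannot reproduce
# the preset path accurately enough, split the factors into two lower priorities
# and refit each one separately.
from typing import Callable, List, Sequence, Tuple

def split_priority(factor_names: Sequence[str]) -> Tuple[List[str], List[str]]:
    mid = len(factor_names) // 2
    return list(factor_names[:mid]), list(factor_names[mid:])

def refine_weights(factor_names: Sequence[str],
                   fit_fn: Callable[[Sequence[str]], List[float]],
                   is_accurate: Callable[[List[float]], bool]):
    first = fit_fn(factor_names)                       # first linear combination weight
    if is_accurate(first):
        return [(list(factor_names), first)]
    high, low = split_priority(factor_names)           # two priorities, both below the original
    return [(high, fit_fn(high)), (low, fit_fn(low))]  # second linear combination weights
```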
In one possible implementation, the processing module 302 is specifically configured to split the priority of the path calculation factors of the group with the largest weight among the current linear combination weights if it is determined that the current linear combination weights do not reach the accuracy corresponding to the preset path and there are multiple groups of current linear combination weights.
Fig. 7 is a schematic structural diagram of another apparatus for determining a forwarding path according to an embodiment of the present application. In one possible implementation, as shown in fig. 7, the apparatus 30 further includes a learning module 303, configured to obtain a comprehensive score through inverse reinforcement learning according to the preset path calculation request and the preset path. After the receiving module 301 receives the path calculation request, the learning module 303 is further configured to obtain a path conforming to the comprehensive score according to the path calculation request and determine that path as the forwarding path.
When the apparatus 30 is used to perform the above methods, it may be applied in the application scenarios shown in fig. 1 or fig. 3, for example as the PCE 10 in the scenario shown in fig. 1. It should be noted that, in the embodiment of the present application, the division into modules is schematic and is merely a logical function division; other division manners may be used in actual implementation. The functional modules in the embodiment of the present application may be integrated in one processing module, each module may exist alone physically, or two or more modules may be integrated in one module. For example, in the above embodiment, the receiving module and the processing module may be the same module or different modules. The integrated modules may be implemented in hardware, such as a chip, or as software functional modules.
In addition, the embodiment of the present application further provides a network device 40. As shown in fig. 8, fig. 8 is a schematic structural diagram of the network device 40 according to the embodiment of the present application. The network device 40 includes a communication interface 401 and a processor 402 connected to the communication interface 401. The communication interface is, for example, a transceiver. The network device 40 may be used to perform the methods in the above embodiments; specifically, it may perform the operations performed by the device in the methods S101 to S104 and S201 to S207. The communication interface 401 is used to perform the transceiving operations performed by the device in these methods, and the processor 402 is configured to perform the operations other than the transceiving operations performed by the device in these methods.
In addition, the embodiment of the present application further provides a network device 50. As shown in fig. 9, fig. 9 is a schematic structural diagram of the network device 50 according to the embodiment of the present application. The network device 50 may include a processor 501, and a memory 502 and a transceiver 503 coupled to the processor 501. The transceiver 503 may be a communication interface, an optical module, or the like, for receiving messages or data information. The processor 501 may be a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), or a combination of a CPU and an NP, for performing the forwarding-related steps of the device exemplified in the above embodiments. The processor may also be an application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), a programmable logic device (Programmable Logic Device, PLD), or a combination thereof. The PLD may be a complex programmable logic device (Complex Programmable Logic Device, CPLD), a field-programmable gate array (Field-Programmable Gate Array, FPGA), generic array logic (Generic Array Logic, GAL), or any combination thereof. The processor 501 may refer to one processor or may include multiple processors. The memory 502 may include volatile memory, such as random-access memory (Random-Access Memory, RAM); the memory may also include non-volatile memory (non-volatile memory), such as read-only memory (Read-Only Memory, ROM), flash memory (flash memory), a hard disk drive (Hard Disk Drive, HDD), or a solid-state drive (Solid-State Drive, SSD); the memory 502 may also include a combination of the above types of memory. The memory 502 may refer to one memory or may include multiple memories, and is used for storing a set of paths and for storing program instructions. In one embodiment, the memory 502 stores computer-readable instructions comprising a plurality of software modules, such as a sending module, a processing module, and a receiving module. After executing a software module, the processor 501 may perform the corresponding operations as directed by that software module. In the present embodiment, an operation performed by a software module actually refers to an operation performed by the processor 501 according to the instructions of that software module. Alternatively, the processor 501 may itself store the program code or instructions for performing the embodiments of the present application, in which case the processor 501 need not read the program code or instructions from the memory 502.
The network device 50 may be used to perform the methods in the above embodiments. Specifically, the network device 50 may be configured to perform the operations performed by the device in the methods S101 to S104 and S201 to S207. The transceiver 503 is configured to receive a preset path calculation request and a preset path, where the preset path calculation request includes a first starting node, a first ending node, and a first path calculation factor of the preset path, and the preset path is obtained according to an instruction; the transceiver 503 is further configured to receive a path calculation request, where the path calculation request includes a requested second starting node, a requested second ending node, and a requested second path calculation factor. The processor 501 is configured to obtain a preset path calculation algorithm according to the preset path calculation request and the preset path, and to determine a forwarding path according to the second path calculation factor and the preset path calculation algorithm if the second starting node is the same as the first starting node and the second ending node is the same as the first ending node. The memory 502 is used to store the preset path calculation algorithm.
Embodiments of the present application also provide a computer readable storage medium having instructions stored therein which, when executed on a processor, implement some or all of the operations in any of the methods of any of the previous embodiments.
Embodiments of the present application also provide a computer program product comprising a computer program which, when run on a processor, implements part or all of the operations in any of the methods of any of the preceding embodiments.
The embodiment of the application also provides a chip, which comprises: an interface circuit and a processor. The interface circuit is coupled to a processor for causing the chip to perform some or all of the operations in any of the methods of any of the embodiments described above.
The embodiment of the application also provides a chip system, which comprises: a processor coupled to the memory, the memory for storing programs or instructions that, when executed by the processor, cause the system-on-a-chip to perform part or all of the operations of any of the methods of any of the preceding embodiments.
Alternatively, the processor in the system-on-chip may be one or more. The processor may be implemented in hardware or in software. When implemented in hardware, the processor may be a logic circuit, an integrated circuit, or the like. When implemented in software, the processor may be a general-purpose processor, implemented by reading software code stored in a memory.
Alternatively, the memory in the system-on-chip may be one or more. The memory may be integrated with the processor or separate from the processor; the embodiments of the present application are not limited in this respect. The memory may be a non-transitory memory, such as a ROM, which may be integrated on the same chip as the processor or provided separately on a different chip; the type of memory and the manner in which the memory and the processor are provided are not particularly limited in the embodiments of the present application.
The system-on-chip may be, for example, a field-programmable gate array (Field-Programmable Gate Array, FPGA), an application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), a system on chip (System on Chip, SoC), a CPU, a digital signal processor (Digital Signal Processor, DSP), a microcontroller (Micro Controller Unit, MCU), a programmable logic device (Programmable Logic Device, PLD), or another integrated chip.
The terms first, second, third and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (14)

1. A method of determining a forwarding path, comprising:
Receiving a preset path calculation request and a preset path, wherein the preset path calculation request comprises a first starting node, a first ending node and a first path calculation factor of the preset path, and the preset path is obtained according to an instruction;
obtaining a preset path calculation algorithm according to the preset path calculation request and the preset path;
receiving a path calculation request, wherein the path calculation request comprises a requested second starting node, a requested second ending node and a requested second path calculation factor;
And if the second starting node is the same as the first starting node and the second ending node is the same as the first ending node, determining a forwarding path according to the second path calculation factor and the preset path calculation algorithm.
2. The method according to claim 1, wherein,
The preset path is obtained according to one or more of a path, resource allocation, the first path calculation factor and expert experience in a network topology diagram, or the preset path is obtained according to a user instruction, wherein the network topology diagram comprises the first starting node, the second starting node, the first ending node and the second ending node.
3. The method according to claim 1 or 2, characterized in that,
updating the preset path calculation algorithm when the difference value between the forwarding path and the preset path is greater than a preset threshold value.
4. The method according to any one of claims 1 to 3, wherein the obtaining a preset path calculation algorithm according to the preset path calculation request and the preset path comprises:
determining each path calculation factor in the first path calculation factor as the same priority;
obtaining a current linear combination weight of each path calculation factor in the same priority through a gradient descent iterative algorithm;
and if it is determined that the current linear combination weight reaches the accuracy corresponding to the preset path, obtaining the preset path calculation algorithm according to the current linear combination weight.
5. The method according to claim 4, wherein,
if it is determined that a first linear combination weight does not reach the accuracy corresponding to the preset path, splitting the priority of each path calculation factor in the same priority into two single priorities, wherein the first linear combination weight is the current linear combination weight, and the split priorities are lower than the priority before splitting;
obtaining a second linear combination weight of each path calculation factor in the single priorities through a gradient descent iterative algorithm;
and if it is determined that the second linear combination weight reaches the accuracy corresponding to the preset path, obtaining the preset path calculation algorithm according to the second linear combination weight.
6. The method according to claim 4 or 5, wherein,
if it is determined that the current linear combination weights do not reach the accuracy corresponding to the preset path and there are multiple groups of the current linear combination weights, splitting the priority of each path calculation factor of the group with the largest weight among the current linear combination weights.
7. The method according to any one of claims 1 to 3, further comprising, after receiving the preset path calculation request and the preset path:
obtaining a comprehensive score through inverse reinforcement learning according to the preset path calculation request and the preset path;
and, after receiving the path calculation request, the method further comprises:
And obtaining a path conforming to the comprehensive score according to the path calculation request, and determining the path as the forwarding path.
8. An apparatus, comprising:
a receiving module, configured to receive a preset path calculation request and a preset path, wherein the preset path calculation request comprises a first starting node, a first ending node and a first path calculation factor of the preset path, and the preset path is obtained according to an instruction;
a processing module, configured to obtain a preset path calculation algorithm according to the preset path calculation request and the preset path;
The receiving module is further configured to receive a path calculation request, where the path calculation request includes a requested second starting node, a requested second ending node, and a requested second path calculation factor;
The processing module is further configured to determine a forwarding path according to the second path calculation factor and the preset path calculation algorithm if the second starting node is the same as the first starting node and the second ending node is the same as the first ending node.
9. The apparatus according to claim 8, wherein,
The preset path is obtained according to one or more of a path, resource allocation, the first path calculation factor and expert experience in a network topology diagram, or the preset path is obtained according to a user instruction, wherein the network topology diagram comprises the first starting node, the second starting node, the first ending node and the second ending node.
10. The apparatus according to claim 8 or 9, wherein,
The processing module is further configured to update the preset path calculation algorithm when the difference value between the forwarding path and the preset path is greater than a preset threshold.
11. The apparatus according to any one of claims 9 to 10, wherein,
The processing module is specifically configured to: determine each path calculation factor in the first path calculation factor as the same priority; obtain a current linear combination weight of each path calculation factor in the same priority through a gradient descent iterative algorithm; and, if it is determined that the current linear combination weight reaches the accuracy corresponding to the preset path, obtain the preset path calculation algorithm according to the current linear combination weight.
12. The apparatus according to claim 11, wherein,
The processing module is specifically configured to: if it is determined that a first linear combination weight does not reach the accuracy corresponding to the preset path, split the priority of each path calculation factor in the same priority into two single priorities, wherein the first linear combination weight is the current linear combination weight and the split priorities are lower than the priority before splitting; obtain a second linear combination weight of each path calculation factor in the single priorities through a gradient descent iterative algorithm; and, if it is determined that the second linear combination weight reaches the accuracy corresponding to the preset path, obtain the preset path calculation algorithm according to the second linear combination weight.
13. The apparatus according to claim 11 or 12, wherein,
The processing module is specifically configured to split the priority of each path calculation factor of the group with the largest weight among the current linear combination weights if it is determined that the current linear combination weights do not reach the accuracy corresponding to the preset path and there are multiple groups of the current linear combination weights.
14. The apparatus according to any one of claims 8 to 10, further comprising:
a learning module, configured to obtain a comprehensive score through inverse reinforcement learning according to the preset path calculation request and the preset path; and, after the receiving module receives the path calculation request, the learning module is further configured to obtain a path conforming to the comprehensive score according to the path calculation request, and determine the path as the forwarding path.